We live in a world where digital technology connects everything around us. Businesses, institutions, individuals, and even governments use digital technology to communicate, transfer data, and carry out complex tasks and procedures. However, as much as rapid technological advances have improved quality of life, they have also introduced numerous risks.
Hackers recognized computer technology as fertile ground for criminal activity and learned to exploit technological resources for their own benefit through data breaches, identity theft, and other forms of cyber crime. But how did it all begin? Here we present a brief history of how hacking became so widespread, in the hope of emphasizing the importance of building a safer internet with the NIST framework and the CIS 20 controls.
A Brief Trip Down Memory Lane
The term “hacking” was first coined back in the 1960s, when it carried a decidedly positive meaning. It referred to the clever modifications that MIT model train enthusiasts made to their trains: these engineers devised ways to change how a model train functioned without physically re-engineering the entire device. That might sound trivial today, but it was a very big deal in the 60s. The MIT experts went on to refine their methods on early computer systems.
During this period, a “hack” meant an elegant solution to a problem or a way to improve a device’s function. The public accepted it as a positive term until the 1970s, when it first became associated with malicious intent. Around this time, a group of tech enthusiasts known as “phreakers” realized they could produce codes that gave them access to free long-distance telephone service. Phreakers used various methods to obtain confidential information from the Bell Telephone Company in order to manipulate its network and gain free service illegally.
They even dug through the company’s garbage and ran experiments to gather enough data for their illegal acts. As technology evolved further, hackers found new opportunities for profitable cyber crime. With early computer systems proving increasingly vulnerable, it was time to respond to criminal activity. In 1986, Clifford Stoll, a systems administrator at Lawrence Berkeley National Laboratory, pioneered a digital forensic technique for determining whether an unauthorized user had accessed a system. His work contributed to the arrest of Markus Hess and a group of hackers in West Germany who were illegally selling military data.
Many similar incidents followed over the next decade. The most significant was the Morris worm, which infected over 6,000 computers and caused an estimated $98 million in damages. Congress had first responded to cyber crime in 1986 with the federal Computer Fraud and Abuse Act, which made hacking a punishable crime carrying jail time or monetary fines. Even so, the battle between hackers and the authorities continued for decades, and with each new technological advance, hackers seemed to grow stronger and wiser.
Back to Reality
Fast forward to today: cyber crime is more common than at any point in history. Despite all the efforts to put an end to criminal activity online, hackers have found ways to evade the authorities and remain anonymous. Cyber crime is clearly here to stay, so it is up to us to take the right safety measures to ensure our data and devices are protected. For that purpose, we highly recommend complying with the NIST framework, alongside the CIS 20 controls, as the foundation of your cybersecurity practices!