
Security

Internet Security Demystified

Everyone who uses the Internet has heard the stories of compromised Pentagon computers, millions of stolen passwords, denial of service attacks and more. So what causes the Internet to be so insecure? This article attempts to shed light on the evolution of Internet security issues.

Genesis

American taxpayers paid for the development of the Internet under the large umbrella of the Department of Defense (DoD); more specifically, the Defense Advanced Research Projects Agency, or DARPA, funded the necessary research at universities and private corporations. Our military had some very basic requirements at the time. Computers made by Company A needed to be able to exchange information with computers made by Company B. This requirement arose because Congress had mandated that DoD use a competitive bidding process for procurements to ensure that the low bidder won the contract. Consequently, the DoD was home to every kind of computer made and none of them talked to each other. One other Internet design requirement imposed by the military was that the Internet should be robust enough to operate during wartime, when many of the telephone lines that carried military communications (voice and data) could be bombed out of existence. As it turned out, this requirement for “survivability” meant that the technical architecture of the Internet needed to provide ways for data to be “dynamically rerouted” via whatever links were not bombed out, to ensure that the message had the greatest chance of eventually reaching its intended destination. As we will see later, this requirement imposed such unique design constraints that the military willingly traded off poor security for a higher probability of delivery.

Who Needs Security Anyway?

There was of course a great reason why the most powerful military in history willingly traded off security for survivability. Surprisingly, the answer was that transmission security was not really needed! This is because the military has long employed encryption on all of its communications links to prevent an enemy from intercepting transmissions. With encryption capabilities already in place, the computers could effectively be “relieved” of the need for concern about security. This drove the design requirements of the Internet protocols, which are effectively the language used by the equipment within the Internet.

Internet Design

To understand why the Internet is so insecure you have to actually consider the rules of communication used between pieces of equipment. Understanding just a few of the design choices goes a long way toward understanding Internet security. Since the DoD was already using systems that scrambled everything transmitted, the Internet Protocol design was free to use the lowest-overhead communication of all – namely “plain text.” Plain text protocol design essentially means that all of the communication rules are built around transmissions that anyone can simply read like today's newspaper. Without encryption devices present, credit cards, email messages, entire file transfers, chat sessions and every other application exchange are laid bare on the wire. Of course, that doesn't apply to the DoD, because their links employ encryption.
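To make this concrete, here is a minimal Python sketch of what a plain text protocol exchange looks like; the host name is only an example and the request is an ordinary HTTP query. Every byte of it crosses the network exactly as written, so anyone able to observe the link can read it.

    import socket

    # A plain text protocol exchange: an ordinary HTTP request. Every byte below
    # travels over the wire exactly as written, readable by anyone watching the link.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80), timeout=10) as sock:
        sock.sendall(request.encode("ascii"))
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk

    print(reply.decode("ascii", errors="replace")[:300])  # readable headers and HTML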

Another interesting design decision within the Internet protocols is best understood through the “survivability” requirement. Instead of sending all of the information over a “dedicated link,” the Internet protocols chop the data up into small pieces which travel independently over whatever link is up and are put back together again in the proper order by the receiving system. Since it is possible during wartime for many different paths to be out of commission, it was necessary to define timers that allowed incredibly long periods of time (in computer processing terms) for each piece of information to arrive. Had security remained a requirement, protocol timers would be expected to be set in computer time, which is milliseconds. But if security is not a concern it's possible to define timers that allow, say, 20 minutes to pass without the sender or receiver tearing down the connection. The consequence of this, however, is that a human hacker has all the time in the world to manipulate the exchange of information; it really isn't even necessary to automate an attack, because Internet systems will just “assume” the transmitter is operating under severely degraded conditions.
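A small illustrative sketch of the two philosophies (the timeout values are made up for illustration): a tight “computer time” timeout versus the generous, survivability-style timeout described above.

    import socket

    # Two hypothetical timeout policies: "computer time" versus the survivability
    # philosophy, where the network is assumed to be badly degraded.
    strict = socket.socket()
    strict.settimeout(0.05)        # 50 milliseconds - give up almost immediately

    patient = socket.socket()
    patient.settimeout(20 * 60)    # 20 minutes - keep waiting, assume links are down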

Really Open Systems

These two characteristics of the Internet, plain text transmissions and almost unlimited (in computer time) timers, make the Internet incredibly insecure for anyone who is not using encryption on their transmissions. And since the Internet is an “open system” environment, the documents that specify the required protocol exchanges between any two applications are published for everyone with an interest to read by the Internet Engineering Task Force (IETF). From a security perspective this is a bit like the Bank of America publishing the combinations for all of its safes in the New York Times, but from an engineering perspective it greatly helps to rapidly deploy new Internet applications.

Shhhh...That's a Secret

Why do we hear about Pentagon computer break-ins if the military has encryption on all its systems? Ah, the truth is that not every computer used by the military holds the level of sensitive information required to justify encryption protection. Even though the military, and most Federal Government agencies, view everything as “For Official Use Only,” the truth is that someone breaking into a computer in the Press Release office in the Pentagon is not really going to obtain any secret information anyway. Sometimes, such disclosed “break-ins” are little more than a bureaucrat trying to justify a larger budget for the office.

Theft By Any Other Name

What about hacking account passwords at banks? Yes, that is legitimate theft of corporate property. In comparison, however, let's imagine a similar situation at the level of an individual. Let's say you visit Central Park in New York City, sit down on a bench and spend some time cleaning out your wallet. You decide a cup of coffee would be nice, so you place your wallet down on the park bench and stroll leisurely across the street to a coffee shop. You buy the coffee and head back to the bench, where you expect your wallet will still be sitting just where you left it and no one would dare even take a peek inside because it's your personal property, right? Absurd? Yes, very!

Consider then how the government has spent millions and millions of dollars building sophisticated monitoring systems over its Internet protocol networks and then voraciously prosecuted teenage kids who dared to take a peek at computer systems that had their data hanging out on the Internet for anyone who cared to read it. Of course breaking into any computer should be illegal on the basis that stealing is wrong, but it seems it should be equally wrong for billion-dollar corporations and governments, both of which employ the most highly educated computer experts available, to put their sensitive computers on the Internet in the first place. The hackers have been vilified as genius-level computer gurus who thwarted the best security experts in the world when in fact they interacted with systems that were all too eager to hand over any and all requested information without even so much as a timer set on how fast the hacker should type!

Have Glue, Will Stick

Fortunately, industry came along many years later with add-on security tools that allow information such as credit card accounts to be protected with lightweight, good-quality encryption such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS), capabilities that enabled electronic commerce to flourish on the Internet. Other than these features, however, the Internet still operates like the fully open system it was designed to be.
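As a rough illustration, here is a minimal Python sketch of that add-on layer: the same kind of socket shown earlier, wrapped in TLS so that anything sent through it is encrypted in transit. The host name is an example only.

    import socket
    import ssl

    # The add-on layer: an ordinary socket wrapped in TLS; traffic is ciphertext
    # on the wire and plaintext only at the two endpoints.
    context = ssl.create_default_context()   # verifies the server's certificate

    with socket.create_connection(("example.com", 443), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())              # e.g. 'TLSv1.3'
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(200))              # decrypted here; unreadable in transit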

About the Author

Jason Canon has over 30 years of experience in the computer industry and served as a voting member of the Federal Internetworking Requirements Panel.

Computer Security Authentication by Kent Pinkerton

Computer security authentication means verifying the identity of a user logging onto a network. Passwords, digital certificates, smart cards and biometrics can be used to prove the identity of the user to the network. Computer security authentication also includes verifying message integrity, for example through e-mail authentication and MACs (Message Authentication Codes), which check the integrity of a transmitted message. The methods discussed below include human authentication, challenge-response authentication, passwords, digital signatures, IP spoofing and biometrics.

Human authentication is the verification that a person initiated the transaction, not the computer. Challenge-response authentication is an authentication method used to prove the identity of a user logging onto the network. When a user logs on, the network access server (NAS), wireless access point or authentication server creates a challenge, typically a random number sent to the client machine. The client software uses its password to encrypt the challenge through an encryption algorithm or a one-way hash function and sends the result back to the network. This is the response.
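A minimal sketch of the idea in Python, using a keyed hash (HMAC) and a made-up shared secret; real systems differ in the details, but the shape of the exchange is the same, and the password itself never crosses the network.

    import hashlib
    import hmac
    import secrets

    # Challenge-response with a one-way keyed hash; the shared secret is illustrative.
    shared_secret = b"correct horse battery staple"   # known to both client and server

    # Server: issue a random challenge.
    challenge = secrets.token_bytes(16)

    # Client: hash the challenge with the secret and send back the result.
    response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

    # Server: recompute the expected response and compare in constant time.
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    print(hmac.compare_digest(response, expected))    # True only with the right secret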

Two-factor authentication requires two independent ways to establish identity and privileges. The method of using more than one factor of authentication is also called strong authentication. This contrasts with traditional password authentication, which requires only one factor in order to gain access to a system. A password is a secret word or code used to serve as a security measure against unauthorized access to data. It is normally managed by the operating system or DBMS. However, a computer can only verify the legality of the password, not the legality of the user.
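As a rough illustration of that last point, here is a sketch of salted password verification in Python (the password and iteration count are illustrative): the system can confirm that the right password was typed, but it has no idea who typed it.

    import hashlib
    import hmac
    import os

    # Salted, slow password hashing; parameters are illustrative only.
    def store(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = store("hunter2")
    print(verify("hunter2", salt, digest))   # True  - the password is correct
    print(verify("wrong", salt, digest))     # False - but neither result says who typed it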

The two major applications of digital signatures are for setting up a secure connection to a website and verifying the integrity of files transmitted. IP spoofing refers to inserting the IP address of an authorized user into the transmission of an unauthorized user in order to gain illegal access to a computer system.
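Here is a minimal sketch of signature verification for checking file integrity, assuming the third-party pyca/cryptography package is installed; the key pair and message are made up for illustration.

    # Requires the third-party pyca/cryptography package; keys and message are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"contents of the transmitted file"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)            # passes: file is intact
        public_key.verify(signature, message + b"x")     # raises: file was altered
    except InvalidSignature:
        print("signature check failed - the content was modified in transit")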

Biometrics is a more secure form of authentication than typing passwords or even using smart cards, which can be stolen. However, some methods have relatively high failure rates. For example, fingerprints can be lifted from a water glass and used to fool scanners.

Quantum Cryptography Email Encryption

Encrypted email using public/private key pairs, as in PGP, is not entirely secure. It is claimed that government organisations are able to crack it, and there is always the danger of private keys falling into the hands of hackers. If a unique key could be generated every time a message was sent, and that key could be exchanged between sender and recipient without any danger of interception, then the message could be entirely secure. That is the principle behind quantum cryptography.

The principles of quantum encryption have been with us for some time, and new approaches frequently appear in scientific publications. One of the latest iterations on this subject was published by a research team from the University of Toronto, who claim that their approach is entirely secure and indecipherable. This is how it works.

The essence of quantum cryptography lies in the ability to securely distribute a quantum key between two parties (quantum key distribution) in a way that cannot be intercepted by an eavesdropper without detection. The bits of the key are encoded as quantum data.
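A classical simulation can make the idea concrete. The following Python sketch imitates BB84-style key sifting (a simplification for illustration only): sender and receiver choose random measurement bases, and only the positions where the bases happen to agree contribute to the shared key. An eavesdropper forced to guess bases would disturb those bits, which the parties can detect by comparing a sample.

    import secrets

    # A classical simulation of BB84-style key sifting, for illustration only.
    n = 32
    sender_bits  = [secrets.randbelow(2) for _ in range(n)]
    sender_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
    recv_bases   = [secrets.randbelow(2) for _ in range(n)]

    # Measuring in the wrong basis yields a random result; an eavesdropper guessing
    # bases would disturb the bits in the same way.
    received = [
        bit if sb == rb else secrets.randbelow(2)
        for bit, sb, rb in zip(sender_bits, sender_bases, recv_bases)
    ]

    # The parties publicly compare bases (never the bits) and keep only the matches.
    sifted_key = [b for b, sb, rb in zip(received, sender_bases, recv_bases) if sb == rb]
    print(sifted_key)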

When quantum cryptography was invented it was considered an entirely foolproof way of preventing hacking and encrypting email. This is because if anyone eavesdropped on the message, the quantum state would be disturbed and this would be apparent to the legitimate sender and recipient. That means the encryption key can be transmitted entirely securely between two users.

However there is a fundamental flaw in this reasoning. The key is transmitted using photons which are received by photon detectors, and it is conceivable that these signals could be intercepted and manipulated by a hacker.

This kind of hacking is called a side-channel attack, and it has been acknowledged by a co-inventor of quantum cryptography, Dr. Charles Bennett of IBM. When a side-channel attack is launched, the photon detectors are subverted by light signals so that they detect only the photons the hacker wants the recipient to see.

In the latest approach a solution to this problem has been identified, known as "Measurement Device Independent QKD". Although the hacker can operate the photon detectors and report the measurement results, the sender and recipient can detect this by comparing their own data. The key is spotting the small changes that occur when the quantum data is manipulated.

The sender and recipient send their signals to a third photon detector, which might be controlled by the hacker; it carries out a joint measurement that provides another data point, and that is adequate to ensure the security of the photon detectors. So far experiments have supported the theory, and a prototype system is being produced which should be ready in the next five years.

This is a guest post by Adam, a new Londoner, who has interests in recruitment, all things techy, a passion for travel and a love of fashion. He blogs about recruitment, travel and IT/technology as well as the latest trends in men's and women's fashion. If you want Adam to write specific content for you, feel free to message him on Twitter (@NewburyNewbie).

 

Need Help Defeating Denial of Service Attacks?

On March 27, 2013, Business Insider reported that the biggest cyber attack in history, a distributed denial of service (DDoS) attack, was taking place. The result reported was that Internet speeds around the world slowed noticeably. The first wave of these attacks began back on March 18th. Identified as Open Systems Interconnection (OSI) Layer 3 attacks, they were focused on the Domain Name System (DNS) servers operated by the non-profit anti-spam organization Spamhaus. Spamhaus provides DNS services whose loss destabilized major portions of the Internet. The attack was recorded at a sustained level of 300 gigabits per second! Were it not for the distributed structure of Spamhaus operations, it likely would have been taken completely offline. The attack was alleged to have been concocted after Spamhaus blocked a Dutch web hosting company named Cyberbunker in an effort to weed out spammers. Spamhaus subsequently accused Cyberbunker of working with Russian and Eastern European criminal organizations to facilitate the attack.


The largest source of the attack traffic against Spamhaus was DNS reflection. The basic idea of a DNS reflection attack is to send source-IP-spoofed requests for large DNS zone file transfers to a large number of open DNS resolvers. The resolvers respond by sending the large DNS zone files to the intended victim, in this case Spamhaus. The attackers' requests are only a small fraction of the size of the responses, which lets the attackers amplify the attack many times beyond their own bandwidth. Requests were approximately 36 bytes long while responses were approximately 3,000 bytes, translating to an amplification factor of roughly 100x. In addition to the DNS reflection, the attackers also threw in a TCP ACK reflection attack, in which the attacker sends a number of SYN packets to servers with spoofed source IP addresses that point back to the intended victim, so the servers' acknowledgments are directed at the victim. The ACK traffic is symmetrical with the bandwidth owned by the attacker, however, so there is no amplification benefit.
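A quick back-of-the-envelope check of that leverage, using the approximate packet sizes reported for the attack (the attacker bandwidth figure is made up for illustration):

    # Approximate packet sizes reported for the attack; attacker bandwidth is illustrative.
    request_bytes = 36
    response_bytes = 3_000
    amplification = response_bytes / request_bytes
    print(f"~{amplification:.0f}x amplification per spoofed request")   # on the order of 100x

    attacker_gbps = 3
    print(f"~{attacker_gbps * amplification:.0f} Gbps reflected at the victim")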

There is very little any operational administrator can do when their routers are being asked to process more data than will fit into the pipe. Back in May 2000, Internet Request for Comments (RFC) 2827, a Best Current Practice, was published with the title “Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing.” The RFC describes a network ingress filtering method that can prohibit a DDoS attack from being launched from within the network of an Internet connectivity provider. The purpose of the filters is to preclude an attacker from launching spoofed IP address attacks. Where an Internet connectivity provider aggregates routing announcements for multiple downstream networks, strict traffic filtering is used to prohibit traffic that appears to have originated from outside the aggregated announcements. Thus, an attacker would have to launch attacks using their true source IP address, which would rapidly serve to identify the assailants.
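The filtering logic itself is simple. Here is a minimal Python sketch of the ingress check described in the RFC (the prefixes and addresses are illustrative): a provider forwards a packet only if its source address falls within one of the prefixes announced for its downstream customers.

    import ipaddress

    # Prefixes and addresses are illustrative examples only.
    announced_prefixes = [
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def permit(source_ip: str) -> bool:
        # Forward only if the source address belongs to an announced customer prefix.
        addr = ipaddress.ip_address(source_ip)
        return any(addr in prefix for prefix in announced_prefixes)

    print(permit("203.0.113.7"))   # True  - legitimate downstream source
    print(permit("192.0.2.99"))    # False - spoofed source, dropped at the edge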

This is not to say that ingress filtering is a panacea against IP-spoofed DDoS attacks. Indeed, it does not stop an attacker who uses a forged source address belonging to another legitimate host within the permitted prefix range. However, this requires additional homework on the part of the attacker, and the time it takes to discover the attacker is significantly reduced. Also, the administrator under attack can take concrete steps to stop the attack in progress without affecting other visitors.

Security Glossary, Version 2

RFC 4949 was published as a major revision and expansion of the Internet Security Glossary contained in RFC 2828. The Glossary provides definitions, abbreviations, and explanations of terminology for information system security. The 334 pages of entries offer recommendations to improve the comprehensibility of written material that is generated in the Internet Standards Process. The recommendations follow the principles that such writing should (a) use the same term or definition whenever the same concept is mentioned; (b) use terms in their plainest, dictionary sense; (c) use terms that are already well-established in open publications; and (d) avoid terms that either favor a particular vendor or favor a particular technology or mechanism over other, competing techniques that already exist or could be developed.