Security & Law Enforcement Risks - Portable USB Devices by Michael R. Anderson
It wasn't that many years ago that we were limited to cassette tapes and 160k floppy diskettes to store our important computer data. I remember when I bought my first computer... a trusty Tandy TRS-80 Model III. Back in the late 1970s, when that computer was in its prime, you stored your important files on cassette tapes. When 160k floppy disk drives became available, I thought I was in "geek heaven". I remember wondering, "How will I ever fill a 160k floppy diskette with data?" Then, when five-megabyte hard disk drives became available in the early 1980s, I was really blown away. We never imagined that hard disk drive sizes would eventually grow into the multiple-gigabyte range and that floppy diskette storage would eventually exceed 2 megabytes. Portable flash memory devices were beyond the comprehension of most computer users back then. Boy, have things changed!
Thanks to the popularity of digital photography and the advent of flash memory chips, computer storage devices today are compact and their storage capacities can exceed one gigabyte. In just two decades, computer technology advances have made it possible for us to store the data capacity equivalent of two hundred 1980-vintage hard disk drives in tiny portable devices that easily fit in the palm of a hand. Some of these devices pull double duty and have been configured into key chains, pens, watches and even a Swiss Army knife. These portable storage devices make it convenient for us to back up important computer files and to transfer data from one computer to another using USB technology. However, they also create significant security risks for government and corporate employers because proprietary and/or classified data can easily find its way onto these devices. Granted, corporate and government policies often forbid the use of these devices, but are those policies followed? The insider threat is a real one when it comes to the unauthorized copying and storage of proprietary corporate data, e.g., client databases, bids, insider information and research and development data. Private sector government contractors typically have access to classified government data and information. Can you imagine the problems created when classified data migrates onto these portable flash memory devices? Needless to say, these new storage technologies have a high "wow" factor for those of us who live and breathe computer technology. But the same devices are causing CIOs and CSOs to rethink their internal security policies and the nature of their internal threats.
With computer security and the insider risk in mind, take a look at a sampling of flash memory-based USB devices that are currently available in the marketplace. Examples of compact storage devices:
These graphics were created and donated for use by NTI's clients by Dr. Henry B. Wolfe, Associate Professor, Computer Security & Forensics, Information Science Department, School of Business, University of Otago, Dunedin, New Zealand. The illustrations came into being because of his research tied to potential business risks. It is important to note that much of the computer security and business liability research world-wide is being done by universities in their schools of business. This is because computer technology advances have come with a mixed blessing and many new risks and liabilities have been created for businesses in recent times.
Take for example these beautiful flash memory-based executive pens:
Food for thought.....
* Will one of these USB-based storage device pens be overlooked by law enforcement officers during the execution of a search warrant in a computer related investigation? Without proper training and an awareness of current technologies, I think this is likely.
* Will a probation and parole officer consider these as prohibited items for use by a convicted sex offender? Without training and awareness, I think not. They each have a storage capacity of approximately 256 million bytes and each can easily store 200 images of child pornography within the flash memory chips contained in the body of the pens.
* Will the security officers at the entrance and exit points of a classified government facility consider these to be banned items and a potential security risk? In most classified facilities they are well aware of computers, cell phones, digital cameras and weapons but it is my guess that these will pass right through security stations without detection.
* Would things have been different if this technology had existed years ago in the Hanssen spy case? You might recall the case of FBI Special Agent Robert Hanssen, who was accused of spying for the Soviet Union and of stealing classified information from the FBI's computers and files for more than a decade. It is my guess that these devices would have been very helpful in the theft of U.S. government secrets in that case.
I have one of these flash memory pens and it is very effective at disguising the fact that it is a large-capacity external flash memory storage device. The one I have was made by PNY Technologies and was purchased for under $60. It has a storage capacity of 128 megabytes, but I could have paid a few more dollars for one with a storage capacity of 256 megabytes. The device I purchased also functions as a beautiful executive pen, which I use on a daily basis. Although this portable storage device has a high "wow" factor, it is also scary from a computer security risk standpoint. A disgruntled employee armed with one of these pens could easily steal company data. The same could be true of a contractor, e.g., a janitor or repairman. When Windows 2000 and Windows XP-based systems are involved, the pen automatically interacts with the computer through a USB port and no installation of special software or drivers is required. These operating systems automatically recognize the device as a removable storage device, and files can easily be copied from the system to the device under DOS or through the GUI. Granted, passwords are required to log onto these systems, but if a system is left running, it could easily be compromised. The same would be true if passwords are written near the computer keyboard, which I know to be the case in some government and private sector office environments.
The same security threats could exist with the cool flash memory based watch that is illustrated below. Most people wear watches in the workplace and these devices could easily go unnoticed in most businesses and government agencies. Most people are not aware that this beautiful watch also doubles as a mass computer data storage device which is capable of storing over 256 megabytes of computer data. I don't have one of these yet, but my wife may find it on my Christmas list. (;^)
My intention in writing this article is to provide a wake-up call for law enforcement and security officials. Also, be aware that NTI covers these security risks and others in its popular 5 Day Computer Forensics Training Course. Unfortunately there is no magic remedy that will resolve the insider threat posed by these new computer storage technologies. Awareness, policies and policy enforcement are really the only answer when it comes to insider threats tied to portable USB devices. For more information about flash memory storage devices, please review the articles posted on NTI's web site at http://www.forensics-intl.com/art16.html and http://www.forensics-intl.com/art23.html .
Computer security used to be simple, says Phil Farrell, computer systems manager for the School of Earth Sciences. "When I started here in 1985, nobody thought about security -- other than to make sure that users didn't do anything to accidentally screw things up." There were a couple of hundred computers on the entire Internet then, the term "hacker" was just beginning to be used as a pejorative and worms were something that one worried about biting into in an apple.
Eighteen years and millions of computers later, computer security issues have grown exponentially more complex. On the campus alone, approximately 40,000 computers are connected to the Internet, said Ced Bennett, director of information security services for Information Technology Systems and Services (ITSS). In cyberspace, thousands of automated programs run day and night, constantly checking computers for a way to break in. An average computer is tested for vulnerabilities -- including software that's not been updated or easily guessed passwords -- once every six minutes, Bennett said. "It's like the Wild West. Everyone has a six-gun and is looking for someone to have a fight with."
Once a hacker finds a way into a computer, everything on it could be wiped out or it can be used to launch attacks on other computers and systems, Bennett said. By linking computers together, hackers aggregate enough processing power to overwhelm and cripple targeted systems.
"Worms" like the Sapphire code that affected 700,000 computers worldwide in late January exploit weaknesses in software and spread by automatically replicating themselves. While Sapphire shut down computer networks at the University of Pennsylvania and Ohio State University, ITSS personnel were able to fairly quickly isolate and fix affected networks, Bennett said. "When it reached computers, it didn't do any damage -- but it easily could have."
Stanford's network is particularly hard to secure because it's intentionally open, Bennett said. Unlike profit-making institutions, which close their networks, Stanford considers the open flow of information a fundamental part of its research and educational mission, he said. "It's part of our culture."
The recent switch from administrative systems that operate on mainframe computers to ones run on server-based machines also has heightened computer security risks, he said. "It's not that the mainframe was any safer, but it was more obscure," Bennett said. "It wasn't like having your systems on the Internet."
On the Internet, "the number of people who could come at you is not infinite, but it's huge," said Tina Darmohray, a computer security specialist who does security consulting and client outreach for ITSS. "That is the challenge for anything you put on the Internet."
As administrative systems have moved to a place where there is more risk of break-in, the security team has changed its focus from responding to security incidents to preventing them, Bennett said. "The realization [of the risk] became clearer a year ago."
In a pilot project started last year in the dormitories, ITSS began scanning every computer to look for missing or easily guessed passwords, like "user" or "beatcal" or "four-letter-word-cal," Bennett said. The test found 200 such "bad" passwords in the first 4,500 machines that were scanned, he said. "Five percent is a lot."
Since then, ITSS has begun a program to systematically scan campus computers for bad or nonexistent passwords and let users know when they find one. The good news -- "the amazing news" -- is that their efforts have made a difference in the number of break-ins, Darmohray said. "We've raised the consciousness just a little that computer security is necessary." Later this year, ITSS will launch a security awareness campaign designed to educate campus users about computer security (see sidebar).
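The kind of weak-password scan described above can be approximated in a few lines. This is an illustrative sketch, not ITSS's actual tool; the word list and the six-character threshold are assumptions chosen for the example.

```python
# Hypothetical weak-password check, modeled on the scan described above.
# WEAK_PASSWORDS and the length threshold are illustrative assumptions.

WEAK_PASSWORDS = {"", "user", "password", "beatcal", "admin", "letmein"}

def is_weak(password: str) -> bool:
    """Flag missing, dictionary, or very short passwords."""
    p = password.lower()
    return p in WEAK_PASSWORDS or len(p) < 6

print(is_weak("beatcal"))      # True: one of the "bad" passwords from the pilot
print(is_weak("tr0ub4dor&3"))  # False: not in the list and long enough
```

A real scanner would additionally try the username itself, common keyboard patterns, and site-specific phrases.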
In addition to outside risks to security, attention has to be paid to the damage that can come from people misusing systems inside the university, Bennett said. "It's known that most bad things that happen, happen inside a company," he said.
It's difficult to calculate the costs of computer break-ins to the university, he said. It can take days to reload systems on computers that have been broken into or to resurrect toppled systems, but those costs pale beside the cost to Stanford's reputation if it were to sustain a catastrophic security breach.
Before the 1906 earthquake, Stanford had achieved the status of a top 10 university, he said. After the earthquake hit, it took a decade for the university to regain its status. A serious security breach "could be equally catastrophic," he said. SR
Computer Security Ethics and Privacy by: Vincent Q. Deguzman
Today, many people rely on computers to do homework, to work, and to create or store useful information. Therefore, it is important for the information on the computer to be stored and kept properly. It is also extremely important for computer users to protect their computers from data loss, misuse, and abuse. For example, it is crucial for businesses to keep the information they hold secure so that hackers can't access it. Home users also need to take steps to make sure that their credit card numbers are secure when they participate in online transactions. A computer security risk is any action that could cause loss of information or software, data damage, processing incompatibilities, or damage to computer hardware, and many of these actions are planned to do damage. An intentional breach of computer security is known as a computer crime, which is slightly different from a cybercrime. A cybercrime is an illegal act carried out over the internet and is one of the FBI's top priorities. There are several distinct categories of people who commit cybercrimes, and they are referred to as hackers, crackers, cyberterrorists, cyberextortionists, unethical employees, script kiddies and corporate spies. The term hacker actually used to be a positive one, but it now has a very negative connotation. A hacker is defined as someone who accesses a computer or computer network unlawfully. Hackers often claim that they do this to find leaks in the security of a network. The term cracker has never been associated with anything positive; it refers to someone who intentionally accesses a computer or computer network for malicious reasons. A cracker is basically a malicious hacker, accessing systems with the intent of destroying or stealing information. Both crackers and hackers have very advanced network skills. A cyberterrorist is someone who uses a computer network or the internet to destroy computers for political reasons.
A cyberterrorist attack is much like a conventional terrorist attack because it requires highly skilled individuals, millions of dollars to implement, and years of planning. A cyberextortionist is someone who uses email as an offensive force. They will usually send a company a very threatening email stating that they will release confidential information, exploit a security leak, or launch an attack that will harm the company's network unless a payment is made, which is, in a sense, a form of blackmail. An unethical employee is an employee who illegally accesses their company's network for any number of reasons. One could be the money to be made from selling top secret information; others may be bitter and want revenge. A script kiddie is like a cracker in that they may have the intention of doing harm, but they usually lack the technical skills. They are often teenagers who use prewritten hacking and cracking programs. A corporate spy has extremely high computer and network skills and is hired to break into a specific computer or computer network to steal or delete data and information. Shady companies hire these types of people in a practice known as corporate espionage, an illegal practice, to gain an advantage over their competition. Business and home users must do their best to protect and safeguard their computers from security risks. The next part of this article will give some pointers to help you protect your computer. However, remember that there is no way to guarantee one hundred percent protection for your computer, so becoming more knowledgeable about these risks is a must these days. Information transferred over the internet carries a higher security risk than information transmitted within a business network, because on a business network the administrators usually take extreme measures to protect against security risks, whereas on the internet there is no central administrator, which makes the risk a lot higher.
If you're not sure whether your computer is vulnerable to a computer risk, you can always use some type of online security service, which is a website that checks your computer for email and internet vulnerabilities. The company will then give some pointers on how to correct these vulnerabilities. The Computer Emergency Response Team Coordination Center is one place that can do this. The typical network attacks that put computers at risk include viruses, worms, spoofing, Trojan horses, and denial of service attacks. Every unprotected computer is vulnerable to a computer virus, which is a potentially harmful computer program that infects a computer and alters the way the computer operates without the user's consent. Once the virus is in the computer it can spread, infecting other files and potentially damaging the operating system itself. It's similar to a virus that infects humans in that it gets into the body through small openings, can spread to other parts of the body, and can cause some damage; the similarity continues in that the best way to avoid either is preparation. A computer worm is a program that repeatedly copies itself and is very similar to a computer virus. The difference, however, is that a virus needs to attach itself to an executable file and become a part of it, while a computer worm doesn't need to do that; it simply copies itself to other networks and eats up a lot of bandwidth. A Trojan horse, named after the famous Greek myth, is a program that hides within, or looks like, a legitimate program but is a fake. A certain action usually triggers the Trojan horse, and unlike viruses and worms, Trojan horses don't replicate themselves. Computer viruses, worms, and Trojan horses are all classified as malicious-logic programs, which are simply programs that deliberately harm a computer. Although these are the common three, there are many more variations and it would be almost impossible to list them all.
You can tell a computer is infected by a virus, worm, or Trojan horse if one or more of these things happen:
* Screen shots of weird messages or pictures appear.
* You have less available memory than you expected.
* Music or sounds play randomly.
* Files get corrupted.
* Programs or files don't work properly.
* Unknown files or programs randomly appear.
* System properties fluctuate.
Computer viruses, worms, and Trojan horses deliver their payload, or instructions, in four common ways. First, when an individual runs an infected program; if you download a lot of things, you should always scan the files before executing them, especially executable files. Second, when an individual opens an infected file. Third, when an individual boots a computer from an infected drive, which is why it's important not to leave removable media in your computer when you shut it down. Fourth, when an unprotected computer is connected to a network. Today, a very common way that people get a computer virus, worm, or Trojan horse is by opening an infected file received as an email attachment. There are literally thousands of malicious-logic programs, and new ones come out in large numbers, so it's important to keep up to date with the new ones that come out each day; many websites keep track of this. There is no known method for completely protecting a computer or computer network from viruses, worms, and Trojan horses, but people can take several precautions to significantly reduce their chances of being infected by one of these malicious programs. Whenever you start a computer you should have no removable media in the drives. This goes for CDs, DVDs, and floppy disks. When the computer starts up, it tries to execute the boot sector on the drives, and even if the attempt is unsuccessful, any virus on the boot sector can infect the computer's hard disk. If you must start the computer from removable media for a particular reason, such as when the hard disk fails and you are trying to reformat the drive, make sure that the media is not infected.
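One way to "scan the files before executing" is to compare a downloaded file's cryptographic digest against a list of known-bad signatures. The sketch below is a simplified illustration of that idea; real antivirus products use vendor-maintained signature databases and far more sophisticated matching. The digest shown is just the SHA-256 of an empty file, used here as a stand-in for a real malware signature.

```python
import hashlib

# Hypothetical signature list; this entry is the SHA-256 of an empty file,
# used purely as a placeholder for a real malware digest.
KNOWN_BAD_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_malicious(path: str) -> bool:
    return sha256_of(path) in KNOWN_BAD_DIGESTS
```

Signature matching like this only catches files already known to be bad, which is why the article's advice to keep signatures up to date matters.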
[Ed. Note: For security, I use several programs, Zone Alarm, AVG, Pest Patrol, Spyware Sweeper, Spyware terminator.]
Computer security is a branch of technology known as information security as applied to computer(s). The objective of computer security can include protection of information from theft or corruption, or the preservation of availability, as defined in the security policy.
Computer security imposes requirements on computers that are different from most system requirements because they often take the form of constraints on what computers are not supposed to do. This makes computer security particularly challenging because it is hard enough just to make computer programs do everything they are designed to do correctly. Furthermore, negative requirements are deceptively complicated to satisfy and require exhaustive testing to verify, which is impractical for most computer programs. Computer security provides a technical strategy to convert negative requirements to positive enforceable rules. For this reason, computer security is often more technical and mathematical than some computer science fields.
Typical approaches to improving computer security (in approximate order of strength) can include the following:
* Physically limit access to computers to only those who will not compromise security.
* Hardware mechanisms that impose rules on computer programs, thus avoiding depending on computer programs for computer security.
* Operating system mechanisms that impose rules on programs to avoid trusting computer programs.
* Programming strategies to make computer programs dependable and resist subversion.
Hardware mechanisms that protect computers and data
Hardware based or assisted computer security offers an alternative to software-only computer security. Devices such as dongles may be considered more secure due to the physical access required in order to be compromised.
While many software-based security solutions encrypt data to prevent it from being stolen, a malicious program may corrupt the data in order to make it unrecoverable or unusable. Hardware-based security solutions can prevent read and write access to data and hence offer very strong protection against tampering.
Secure operating systems
One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a computer security policy is the Bell-La Padula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special, correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.
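The Bell-La Padula policy mentioned above boils down to two checks: a subject may not read objects above its clearance ("no read up", the simple security property) and may not write to objects below it ("no write down", the *-property). The sketch below is a minimal illustration of those two rules, not a real reference monitor; the level labels follow the conventional U.S. classification scheme.

```python
# Minimal sketch of the two mandatory Bell-La Padula rules.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_read(subject: str, obj: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject] >= LEVELS[obj]

def may_write(subject: str, obj: str) -> bool:
    """*-property: no write down (prevents leaking data to lower levels)."""
    return LEVELS[subject] <= LEVELS[obj]

print(may_read("secret", "top secret"))     # False: reading up is denied
print(may_write("secret", "unclassified"))  # False: writing down is denied
print(may_read("secret", "confidential"))   # True: reading down is allowed
```

The point of the secure-kernel designs described above is that these checks are enforced by the kernel and hardware on every access, not left to applications.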
Systems designed with such methodology represent the state of the art of computer security, although products using such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two components, security functionality and assurance level (such as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product descriptions. None of these ultra-high-assurance secure general purpose operating systems has been produced for decades or certified under the Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security functions that are implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium robust systems may provide the same security functions as high assurance secure operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are implemented flawlessly, and therefore less dependable. These systems are found in use on web servers, guards, database servers, and management hosts and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.
Security architecture
Security architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned and how they relate to the overall information technology architecture. These controls serve to maintain the system's quality attributes, among them confidentiality, integrity, availability, accountability and assurance.[1] In simpler words, a security architecture is the plan that shows where security measures need to be placed. If the plan describes a specific solution then, prior to building such a plan, one would perform a risk analysis. If the plan describes a generic high-level design (a reference architecture), then the plan should be based on a threat analysis.
Security by design
The technologies of computer security are based on logic. There is no universal standard notion of what secure behavior is. "Security" is a concept that is unique to each situation. Security is extraneous to the function of a computer application, rather than ancillary to it, thus security necessarily imposes restrictions on the application's behavior.
There are several approaches to security in computing, sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
4. Trust no software but enforce a security policy with trustworthy mechanisms.
Many systems have unintentionally ended up in the first category. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture, with thin layers of two and thick layers of four.
There are myriad strategies and techniques used to design security systems. There are few, if any, effective strategies to enhance security after design.
One technique enforces the principle of least privilege to great extent, where an entity has only the privileges that are needed for its function. That way even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
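Least privilege, as described above, can be reduced to a deny-by-default permission table: each component is granted only what its function needs, and anything not granted is refused. This is a toy sketch of the principle; the component names and permission strings are invented for the example.

```python
# Illustrative least-privilege table: grants are explicit, denial is the default.
GRANTS = {
    "report_generator": {"read:sales"},
    "backup_agent":     {"read:sales", "read:hr", "write:backups"},
}

def allowed(component: str, permission: str) -> bool:
    """Deny by default: unknown components and ungranted permissions fail."""
    return permission in GRANTS.get(component, set())

print(allowed("report_generator", "read:sales"))      # True: within its grant
print(allowed("report_generator", "write:backups"))   # False: not its job
```

An attacker who compromises the report generator in this scheme gains only `read:sales`, which is the containment property the paragraph above describes.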
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.
The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
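The append-only audit trail mentioned above can be made tamper-evident with a hash chain: each record stores a digest computed over the previous record's digest, so altering or removing an earlier entry invalidates every digest after it. This is an illustrative sketch of the principle, not a complete remote-logging system.

```python
import hashlib

GENESIS = "0" * 64  # placeholder digest used before the first entry exists

def append_entry(log: list, message: str) -> None:
    """Append (digest, message); the digest also covers the previous digest."""
    prev = log[-1][0] if log else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((digest, message))

def chain_intact(log: list) -> bool:
    """Recompute every digest; any edited or dropped entry breaks the chain."""
    prev = GENESIS
    for digest, message in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

In practice the chain would also be shipped to a separate, append-only log host, so an intruder on the compromised machine cannot simply rewrite it, which is the "stored remotely" point made above.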
Early history of security by design
The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.
Secure coding
Main article: Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall into a 'low security' category because they rely on features not supported by secure operating systems (such as portability). In low security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection.
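One of these defect classes, code/command injection, can be illustrated with a short sketch. The example below uses Python's built-in sqlite3 module; the table, data, and function names are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # String concatenation lets crafted input rewrite the query itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # A parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every secret in the table
print(lookup_safe(payload))    # [] -- no user literally has that name
```

The same principle, keeping untrusted input out of the code/command channel, applies whether the sink is SQL, a shell command, or a format string.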
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.
Recently another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered academic and not practically exploitable. [2]
In summary, 'secure coding' can provide significant payback in low security operating environments and is therefore worth the effort. Still, there is no known way to provide a reliable degree of subversion resistance through any degree or combination of 'secure coding.'
Capabilities vs. ACLs
Main articles: Access control list and Capability (computers)
Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g., Confused deputy problem). It has also been shown that ACL's promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.
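The confused deputy problem mentioned above can be sketched concretely. In the toy model below (all names and the in-memory "filesystem" are invented for illustration), a deputy running with its own ACL rights writes to a client-chosen filename, so a client can aim it at a file the client could never touch directly; the capability version confines the deputy to whatever reference the client already holds:

```python
# ACL model: a "deputy" service has write rights on billing.log.
billing_acl = {"billing.log": {"deputy"}}

def acl_write(user, path, data, files):
    if user in billing_acl.get(path, set()):
        files[path] = data
        return
    raise PermissionError(user)

def deputy_compile(output_path, files):
    # Confused deputy: the deputy's OWN rights are applied to a
    # client-chosen name, so a client can point it at billing.log.
    acl_write("deputy", output_path, "compiled output", files)

files = {}
deputy_compile("billing.log", files)   # the client just overwrote billing!

# Capability model: the client must hand over an unforgeable reference
# it already holds, so the deputy can only write where the client could.
class FileCap:
    def __init__(self, path, files):
        self._path, self._files = path, files   # holding this IS the right
    def write(self, data):
        self._files[self._path] = data

def deputy_compile_cap(out_cap):
    deputy_compile_result = "compiled output"
    out_cap.write(deputy_compile_result)

safe_files = {}
deputy_compile_cap(FileCap("a.out", safe_files))  # confined to a.out
```

In the ACL version the deputy cannot tell whose authority it is exercising; in the capability version authority travels with the reference, so the question never arises.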
Capabilities have been mostly restricted to research operating systems and commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language.
First the Plessey System 250 and then the Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most security comes from operating systems where security is not an add-on, such as OS/400 from IBM, which almost never shows up in lists of vulnerabilities for good reason: years may elapse between one problem needing remediation and the next.
A good example of a secure system is EROS. But see also the article on secure operating systems. TrustedBSD is an example of an open source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.
Applications
Computer security is critical in almost any technology-driven industry which operates on computer systems. Addressing the countless vulnerabilities of computer-based systems is an integral part of maintaining an operational industry. [3]
In aviation
The aviation industry is especially important when analyzing computer security because the involved risks include expensive equipment and cargo, transportation infrastructure, and human life. Security can be compromised by hardware and software malpractice, human error, and faulty operating environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction, and human error. [4]
The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data theft or loss and network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems that control munitions can pose an even greater risk.
A successful attack does not need to be very high tech or well funded; a power outage at an airport alone can cause repercussions worldwide. [5] One of the easiest and, arguably, most difficult to trace security vulnerabilities is the transmission of unauthorized communications over specific radio frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications altogether. Such incidents are common, and have altered the flight courses of commercial aircraft and caused panic and confusion in the past. Controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. Beyond the radar's reach, controllers must rely on periodic radio communications with a third party.
Lightning, power fluctuations, surges, brown-outs, blown fuses, and various other power failures instantly disable all computer systems, since they depend on an electrical source. Other accidental and intentional faults have caused significant disruption of safety-critical systems throughout the last few decades, and dependence on reliable communication and electrical power only heightens the risk to computer safety.
Notable system accidents
In 1994, over a hundred intrusions were made by unidentified hackers into the Rome Laboratory, the US Air Force's main command and research facility. Using Trojan horse programs, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and, by posing as a trusted Rome center user, were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations. [6] Today, a technique called ethical hack testing (penetration testing) is used to find and remediate such issues.
Electromagnetic interference is another threat to computer safety and in 1989, a United States Air Force F-16 jet accidentally dropped a 230 kg bomb in West Georgia after unspecified interference caused the jet's computers to release it. [7]
A similar accident happened in 1994, when two UH-60 Black Hawk helicopters were destroyed by F-15 aircraft in Iraq because the IFF system's encryption malfunctioned.
Terminology
The following terms used in engineering secure systems are explained below.
* A bigger OS, capable of providing a standard API like POSIX, can be built on a secure microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected: e.g. Hurd or Minix 3.
* Authentication techniques can be used to ensure that communication end-points are who they say they are.
* Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
* Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. The next sections discuss their use.
* Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
* Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
* Firewalls can either be hardware devices or software programs. They provide some protection from online intrusion, but since they allow some applications (e.g. web browsers) to connect to the Internet, they don't protect against some unpatched vulnerabilities in these applications (e.g. lists of known unpatched holes from Secunia and SecurityFocus).
* Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
* Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
* Simple microkernels can be written so that we can be sure they don't contain any bugs: e.g. EROS and Coyotos.
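The chain of trust idea in the list above, only loading software certified as authentic by the system's designers, can be sketched with a digest check. Real chains of trust use digital signatures rather than a bare digest table; the module names and contents below are invented for illustration:

```python
import hashlib

# Digests shipped with (and, in a real system, signed into) the loader.
trusted = {"kernel.bin": hashlib.sha256(b"kernel v1 code").hexdigest()}

def load_module(name, blob):
    """Refuse to load software whose digest does not match the
    value certified in the trusted table."""
    digest = hashlib.sha256(blob).hexdigest()
    if trusted.get(name) != digest:
        raise RuntimeError(f"untrusted module: {name}")
    return f"{name} loaded"

print(load_module("kernel.bin", b"kernel v1 code"))   # accepted
try:
    load_module("kernel.bin", b"kernel v1 code + backdoor")
except RuntimeError as err:
    print("rejected:", err)                            # tampered copy refused
```

Each link in a real chain (firmware, bootloader, kernel, drivers) verifies the next in exactly this pattern before handing over control.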
* Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer - such as through an interactive logon screen - or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems.
* Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
* Applications with known security flaws should not be run. Either leave them turned off until they can be patched or otherwise fixed, or delete them and replace them with other applications. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.
Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
* Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups are a fireproof, waterproof, and heat-proof safe, or a separate, offsite location from that in which the original files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that backs up files over the Internet for both businesses and individuals.
  o Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can catch fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location in case of such a disaster. Further, it is recommended that the alternate location be far enough away that the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in World Trade Center I and the recovery site in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and recovery site in the same coastal region, which leaves both vulnerable to hurricane damage (e.g. a primary site in New Orleans and a recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure manner, to prevent them from being stolen.
* Encryption is used to protect a message from the eyes of others. It can be done in several ways: by switching the characters around, replacing characters with others, and even removing characters from the message. These techniques have to be used in combination to make the encryption secure enough, that is to say, sufficiently difficult to crack. Public key encryption is a refined and practical way of doing encryption. It allows, for example, anyone to write a message for a list of recipients, and only those recipients will be able to read that message.
* Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system administrator defined rules.
* Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or to fix vulnerabilities.
* Intrusion-detection systems can scan a network for people who are on the network but should not be there, or who are doing things that they should not be doing, for example trying a lot of passwords to gain access to the network.
* Pinging: the ping application can be used by potential crackers to find out if an IP address is reachable. If a cracker finds a reachable computer, they can try a port scan to detect and attack services on that computer.
* Social engineering awareness: keeping employees aware of the dangers of social engineering, and/or having a policy in place to prevent social engineering, can reduce successful breaches of the network and servers.
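The intrusion-detection behavior described above, spotting someone "trying a lot of passwords", can be sketched as a simple threshold rule. Real IDSs correlate far more signals; the event format and the cutoff of five failures are assumptions of this sketch:

```python
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5   # assumed cutoff for this sketch

def flag_password_guessing(events):
    """Flag source addresses with an unusual number of failed logins.
    Each event is a (source_address, login_succeeded) pair."""
    failures = Counter(src for src, ok in events if not ok)
    return {src for src, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD}

events = [("10.0.0.5", False)] * 7 + [("10.0.0.9", True), ("10.0.0.9", False)]
print(flag_password_guessing(events))   # {'10.0.0.5'}
```

A single failed login is normal; seven in a row from one address is the pattern worth alerting on.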
With computer security and the insider risk in mind, take a look at a sampling of flash memory-based USB devices that are currently available in the marketplace.
Examples of compact storage devices:
These graphics were created and donated for use by NTI's clients by Dr. Henry B. Wolfe, Associate Professor, Computer Security & Forensics, Information Science Department, School of Business, University of Otago, Dunedin, New Zealand. The illustrations came into being because of his research tied to potential business risks. It is important to note that much of the computer security and business liability research world-wide is being done by universities in their schools of business. This is because computer technology advances have come with a mixed blessing and many new risks and liabilities have been created for businesses in recent times.
Take for example these beautiful flash memory-based executive pens:
Food for thought.....
* Will one of these USB-based storage device pens be overlooked by law enforcement officers during the execution of a search warrant in a computer related investigation? Without proper training and an awareness of current technologies, I think this is likely.
* Will a probation and parole officer consider these as prohibited items for use by a convicted sex offender? Without training and awareness, I think not. They each have a storage capacity of approximately 256 million bytes and each can easily store 200 images of child pornography within the flash memory chips contained in the body of the pens.
* Will the security officers at the entrance and exit points of a classified government facility consider these to be banned items and a potential security risk? In most classified facilities they are well aware of computers, cell phones, digital cameras and weapons but it is my guess that these will pass right through security stations without detection.
* Would things have been different if this technology had existed years ago in the Hanssen spy case? You might recall the case of FBI Special Agent Robert Hanssen. He was alleged to be a spy for the Soviet Union and allegedly stole classified information from the FBI's computers and files for more than a decade. It is my guess that these devices would have been very helpful in the theft of U. S. government secrets in that case.
I have one of these flash memory pens and it is very effective in disguising the fact that it is a large capacity external flash memory storage device. The one that I have was made by PNY Technologies and was purchased for under $60. It has a storage capacity of 128 megabytes, but I could have paid a few more dollars for one with a storage capacity of 256 megabytes. The device that I purchased also functions as a beautiful executive pen which I use on a daily basis. Although this portable storage device has a high "wow" factor, it is also scary from a computer security risk standpoint. A disgruntled employee armed with one of these pens could easily steal company data. The same could be true of a contractor, e.g., a janitor or repairman. When Windows 2000 and Windows XP-based systems are involved, the pen automatically interacts with the computer through a USB port and no installation of special software or drivers is required. These operating systems automatically recognize the device as a removable storage device and files can easily be copied from the system to the device under DOS or through the GUI interface. Granted, passwords are required to log onto these systems, but if a system is left running it could easily be compromised. The same would be true if passwords are written near the computer keyboard, which I know to be the case in some government and private sector office environments.
The same security threats could exist with the cool flash memory based watch that is illustrated below. Most people wear watches in the workplace and these devices could easily go unnoticed in most businesses and government agencies. Most people are not aware that this beautiful watch also doubles as a mass computer data storage device which is capable of storing over 256 megabytes of computer data. I don't have one of these yet, but my wife may find it on my Christmas list. (;^)
My intention in writing this article is to provide a wake-up call for law enforcement and security officials. Also, be aware that NTI covers these security risks and others in its popular 5 Day Computer Forensics Training Course. Unfortunately there is no magic remedy that will resolve the insider threat posed by these new computer storage technologies. Awareness, policies and policy enforcement are really the only answer when it comes to insider threats tied to portable USB devices. For more information about flash memory storage devices, please review the articles posted on NTI's web site at http://www.forensics-intl.com/art16.html and http://www.forensics-intl.com/art23.html .
ITSS addresses heightened computer security risk
BY BARBARA PALMER
Computer security used to be simple, says Phil Farrell, computer systems manager for the School of Earth Sciences. "When I started here in 1985, nobody thought about security -- other than to make sure that users didn't do anything to accidentally screw things up." There were a couple of hundred computers on the entire Internet then, the term "hacker" was just beginning to be used as a pejorative and worms were something that one worried about biting into in an apple.
Eighteen years and millions of computers later, computer security issues have grown exponentially more complex. On the campus alone, approximately 40,000 computers are connected to the Internet, said Ced Bennett, director of information security services for Information Technology Systems and Services (ITSS). In cyberspace, thousands of automated programs run day and night, constantly checking computers for a way to break in. An average computer is tested for vulnerabilities -- including software that's not been updated or easily guessed passwords -- once every six minutes, Bennett said. "It's like the Wild West. Everyone has a six-gun and is looking for someone to have a fight with."
Once a hacker finds a way into a computer, everything on it could be wiped out or it can be used to launch attacks on other computers and systems, Bennett said. By linking computers together, hackers aggregate enough processing power to overwhelm and cripple targeted systems.
"Worms" like the Sapphire code that affected 700,000 computers worldwide in late January exploit weaknesses in software and spread by automatically replicating themselves. While Sapphire shut down computer networks at the University of Pennsylvania and Ohio State University, ITSS personnel were able to fairly quickly isolate and fix affected networks, Bennett said. "When it reached computers, it didn't do any damage -- but it easily could have."
Stanford's network is particularly hard to secure because it's intentionally open, Bennett said. Unlike profit-making institutions, which close their networks, Stanford considers the open flow of information a fundamental part of its research and educational mission, he said. "It's part of our culture."
The recent switch from administrative systems that operate on mainframe computers to ones run on server-based machines also has heightened computer security risks, he said. "It's not that the mainframe was any safer, but it was more obscure," Bennett said. "It wasn't like having your systems on the Internet."
On the Internet, "the number of people who could come at you is not infinite, but it's huge," said Tina Darmohray, a computer security specialist who does security consulting and client outreach for ITSS. "That is the challenge for anything you put on the Internet."
As administrative systems have moved to a place where there is more risk of break-in, the security team has changed its focus from responding to security incidents to preventing them, Bennett said. "The realization [of the risk] became clearer a year ago."
In a pilot project started last year in the dormitories, ITSS began scanning every computer to look for missing or easily guessed passwords, like "user" or "beatcal" or "four-letter-word-cal," Bennett said. The test found 200 such "bad" passwords in the first 4,500 machines that were scanned, he said. "Five percent is a lot."
Since then, ITSS has begun a program to systematically scan campus computers for bad or nonexistent passwords and let users know when they find one. The good news -- "the amazing news" -- is that their efforts have made a difference in the number of break-ins, Darmohray said. "We've raised the consciousness just a little that computer security is necessary." Later this year, ITSS will launch a security awareness campaign designed to educate campus users about computer security (see sidebar).
In addition to outside risks to security, attention has to be paid to the damage that can come from people misusing systems inside the university, Bennett said. "It's known that most bad things that happen, happen inside a company," he said.
It's difficult to calculate the costs of computer break-ins to the university, he said. It can take days to reload systems on computers that have been broken into or to resurrect toppled systems, but those costs pale beside the cost to Stanford's reputation if it were to sustain a catastrophic security breach.
Before the 1906 earthquake, Stanford had achieved the status of a top 10 university, he said. After the earthquake hit, it took a decade for the university to regain its status. A serious security breach "could be equally catastrophic," he said. SR
Computer Security Ethics and Privacy
by: Vincent Q. Deguzman
Today, many people rely on computers to do homework, work, and create or store useful information. Therefore, it is important for the information on the computer to be stored and kept properly. It is also extremely important for people who use computers to protect their computers from data loss, misuse, and abuse. For example, it is crucial for businesses to keep the information they hold secure so that hackers can't access it. Home users also need to take steps to make sure that their credit card numbers are secure when they participate in online transactions. A computer security risk is any action that could cause loss of information or software, data corruption, processing incompatibilities, or damage to computer hardware, and many of these actions are planned to do damage. An intentional breach of computer security is known as a computer crime, which is slightly different from a cybercrime. A cybercrime is an illegal act carried out over the internet, and cybercrime is one of the FBI's top priorities. There are several distinct categories of people who commit cybercrimes, and they are referred to as hackers, crackers, cyberterrorists, cyberextortionists, unethical employees, script kiddies and corporate spies. The term hacker was once a positive term, but now it has a very negative connotation. A hacker is defined as someone who accesses a computer or computer network unlawfully. Hackers often claim that they do this to find leaks in the security of a network. The term cracker has never been associated with anything positive; it refers to someone who intentionally accesses a computer or computer network for malicious reasons: basically, a malicious hacker. Crackers access systems with the intent of destroying or stealing information. Both crackers and hackers are very advanced in network skills. A cyberterrorist is someone who uses a computer network or the internet to destroy computers for political reasons.
It's just like a regular terrorist attack because it requires highly skilled individuals, millions of dollars to implement, and years of planning. A cyberextortionist is someone who uses email as an offensive force. They will usually send a company a very threatening email stating that they will release some confidential information, exploit a security leak, or launch an attack that will harm the company's network, and then request payment to not proceed: blackmailing, in a sense. An unethical employee is an employee who illegally accesses their company's network, for any of numerous reasons. One could be the money they can get from selling top secret information; others may be bitter and want revenge. A script kiddie is someone who is like a cracker in that they may have the intention of doing harm, but they usually lack the technical skills; they are often teenagers who use prewritten hacking and cracking programs. A corporate spy has extremely high computer and network skills and is hired to break into a specific computer or computer network to steal or delete data and information. Shady companies hire these types of people in a practice known as corporate espionage; they do this to gain an advantage over their competition, an illegal practice. Business and home users must do their best to protect or safeguard their computers from security risks. The next part of this article will give some pointers to help protect your computer. However, one must remember that there is no guaranteed way to protect your computer one hundred percent, so becoming more knowledgeable about security risks is a must these days. Information transferred over the open internet carries a higher security risk than information transmitted within a business network, because administrators usually take strong measures to help protect against security risks; over the internet there is no central administrator, which makes the risk a lot higher.
If you're not sure whether your computer is vulnerable to a computer risk, you can always use some type of online security service, which is a website that checks your computer for email and internet vulnerabilities. The company will then give some pointers on how to correct these vulnerabilities. The Computer Emergency Response Team Coordination Center is a place that can do this. The typical network attacks that put computers at risk include viruses, worms, spoofing, Trojan horses, and denial of service attacks. Every unprotected computer is vulnerable to a computer virus, which is a potentially harmful computer program that infects a computer and alters the way the computer operates without the user's consent. Once the virus is in the computer it can spread, infecting other files and potentially damaging the operating system itself. It's similar to a virus that infects humans, in that it gets into the body through small openings, can spread to other parts of the body, and can cause damage; and, as with human viruses, the best way to avoid infection is preparation. A computer worm is a program that repeatedly copies itself and is very similar to a computer virus. The difference is that a virus needs to attach itself to an executable file and become a part of it, while a computer worm doesn't need to do that; it copies itself across networks and eats up a lot of bandwidth. A Trojan horse, named after the famous Greek myth, is a program that secretly hides and actually looks like a legitimate program but is a fake. A certain action usually triggers the Trojan horse, and unlike viruses and worms, it doesn't replicate itself. Computer viruses, worms, and Trojan horses are all classified as malicious-logic programs, which are simply programs that deliberately harm a computer. Although these are the common three, there are many more variations and it would be almost impossible to list them all.
A computer may be infected by a virus, worm, or Trojan horse if one or more of these symptoms appear:
* Strange messages or pictures appear on the screen.
* You have less available memory than you expected.
* Music or sounds play randomly.
* Files become corrupted.
* Programs or files do not work properly.
* Unknown files or programs appear unexpectedly.
* System properties fluctuate.
Computer viruses, worms, and Trojan horses deliver their payload, or instructions, in four common ways. First, when an individual runs an infected program; for this reason, if you download a lot of files, you should always scan them before executing them, especially executable files. Second, when an individual opens an infected file. Third, when an individual boots a computer with an infected drive, which is why it is important not to leave removable media in your computer when you shut it down. Fourth, when an unprotected computer connects to a network. Today, a very common way that people pick up a computer virus, worm, or Trojan horse is by opening an infected file attached to an email. There are literally thousands of malicious-logic programs, and new ones appear constantly, so it is important to keep up to date with the new ones that come out each day; many websites track them. There is no known method for completely protecting a computer or computer network from viruses, worms, and Trojan horses, but people can take several precautions to significantly reduce their chances of being infected by one of these malicious programs. Whenever you start a computer you should have no removable media in the drives: this goes for CDs, DVDs, and floppy disks. When the computer starts up it tries to execute the boot sector on the drives, and even if the attempt is unsuccessful, any virus on the boot sector can infect the computer's hard disk. If you must start the computer from removable media for a particular reason (for example, the hard disk has failed and you are trying to reformat the drive), make sure the media is not infected.
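The advice above to scan files before executing them is, at bottom, what signature-based antivirus software does: it compares a file's bytes against a database of byte patterns known to belong to malicious programs. A minimal sketch (the signature database here is invented purely for illustration):

```python
# Naive signature-based scanner: flags data if any known
# malicious byte pattern appears anywhere in its contents.
# These signatures are made up for the example.
KNOWN_SIGNATURES = {
    "demo-worm": b"\xde\xad\xbe\xef",
    "demo-trojan": b"EVIL_PAYLOAD",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

def scan_file(path: str) -> list[str]:
    """Scan a file on disk by reading its raw bytes."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

Real scanners add much more (hashing, heuristics, on-access hooks), but the core matching idea is the same.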
[Ed. Note: For security, I use several programs: Zone Alarm, AVG, Pest Patrol, Spyware Sweeper, and Spyware Terminator.]
Computer security is a branch of technology known as information security as applied to computer(s). The objective of computer security can include protection of information from theft or corruption, or the preservation of availability, as defined in the security policy.
Computer security imposes requirements on computers that are different from most system requirements because they often take the form of constraints on what computers are not supposed to do. This makes computer security particularly challenging because it is hard enough just to make computer programs do everything they are designed to do correctly. Furthermore, negative requirements are deceptively complicated to satisfy and require exhaustive testing to verify, which is impractical for most computer programs. Computer security provides a technical strategy to convert negative requirements to positive enforceable rules. For this reason, computer security is often more technical and mathematical than some computer science fields.
Typical approaches to improving computer security (in approximate order of strength) can include the following:
* Physically limit access to computers to only those who will not compromise security.
* Hardware mechanisms that impose rules on computer programs, thus avoiding depending on computer programs for computer security.
* Operating system mechanisms that impose rules on programs to avoid trusting computer programs.
* Programming strategies to make computer programs dependable and resist subversion.
Contents
* 1 Hardware mechanisms that protect computers and data
* 2 Secure operating systems
* 3 Security architecture
* 4 Security by design
o 4.1 Early history of security by design
* 5 Secure coding
* 6 Capabilities vs. ACLs
* 7 Applications
o 7.1 In aviation
+ 7.1.1 Notable system accidents
* 8 Terminology
* 9 Notes
* 10 References
* 11 See also
Hardware mechanisms that protect computers and data
Hardware based or assisted computer security offers an alternative to software-only computer security. Devices such as dongles may be considered more secure due to the physical access required in order to be compromised.
While many software-based security solutions encrypt data to prevent it from being stolen, a malicious program may corrupt the data in order to make it unrecoverable or unusable. Hardware-based security solutions can prevent read and write access to data and hence offer very strong protection against tampering.
Secure operating systems
Main article: Secure operating systems
One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a Computer security policy is the Bell-La Padula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.
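The Bell-La Padula model mentioned above reduces to two rules: a subject may read an object only at or below its own clearance level ("no read up") and may write only at or above it ("no write down"). A toy sketch of those two checks (the label ordering is illustrative):

```python
# Toy Bell-La Padula check: "no read up, no write down".
# The classification levels below are illustrative.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple-security property: a subject may not read above its clearance.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property: a subject may not write information down to a lower level.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

In a real secure operating system these checks are enforced by the kernel on every access, not advisory functions like these; the sketch only shows the policy logic.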
Systems designed with such methodology represent the state of the art[clarification needed] of computer security although products using such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN.) The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies security strength of products in terms of two components, security functionality and assurance level (such as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product descriptions. None of these ultra-high assurance secure general purpose operating systems have been produced for decades or certified under the Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security functions that are implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium robust systems may provide the same security functions as high assurance secure operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower levels mean we can be less certain that the security functions are implemented flawlessly, and therefore less dependable. These systems are found in use on web servers, guards, database servers, and management hosts and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.
Security architecture
Main article: Security architecture
Security architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned and how they relate to the overall information technology architecture. These controls serve to maintain the system's quality attributes, among them confidentiality, integrity, availability, accountability, and assurance.[1] In simpler words, a security architecture is the plan that shows where security measures need to be placed. If the plan describes a specific solution then, prior to building such a plan, one would perform a risk analysis. If the plan describes a generic high-level design (reference architecture), then it should be based on a threat analysis.
Security by design
Main article: Security by design
The technologies of computer security are based on logic. There is no universal standard notion of what secure behavior is. "Security" is a concept that is unique to each situation. Security is extraneous to the function of a computer application, rather than ancillary to it, thus security necessarily imposes restrictions on the application's behavior.
There are several approaches to security in computing, sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
4. Trust no software but enforce a security policy with trustworthy mechanisms.
Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach number four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture with thin layers of two and thick layers of four.
There are myriad strategies and techniques used to design security systems. There are few, if any, effective strategies to enhance security after design.
One technique enforces the principle of least privilege to a great extent: an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is amenable to mathematical analysis. Not surprisingly, this is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.
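The least-privilege principle described above can be made concrete by handing a component a narrow interface instead of the whole object. A hypothetical sketch: a reporting component receives a read-only view of a data store rather than the store itself:

```python
# Least privilege: give each component only the operations it needs.
# The class and key names here are hypothetical.
class DataStore:
    def __init__(self):
        self._items = {}
    def read(self, key):
        return self._items.get(key)
    def write(self, key, value):
        self._items[key] = value

class ReadOnlyView:
    """Wrapper exposing only read access to the underlying store."""
    def __init__(self, store: DataStore):
        self._read = store.read   # capture just the one method needed
    def read(self, key):
        return self._read(key)

store = DataStore()
store.write("balance", 100)
view = ReadOnlyView(store)   # a report generator would receive only this
```

A compromised component holding `view` has no `write` method to abuse. (Python does not enforce this boundary against determined code; a secure OS or capability system would enforce the equivalent boundary for real.)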
The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
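"Fail secure" in practice means that when an access check cannot be completed (unknown user, missing rule, an internal error), the system falls back to denying access rather than granting it. A minimal sketch, with a made-up policy table:

```python
# Fail-secure authorization: anything not explicitly allowed,
# or any error while checking, results in denial.
POLICY = {"alice": {"read"}, "bob": {"read", "write"}}  # made-up policy

def is_allowed(user: str, action: str) -> bool:
    try:
        return action in POLICY[user]   # unknown user raises KeyError
    except Exception:
        return False                    # fail secure: deny on any error
```

The design choice is in the `except` branch: a "fail insecure" system would return `True` or crash open there, letting unexpected conditions become access grants.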
In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
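The append-only audit trail suggested above is often made tamper-evident by hash chaining: each entry stores a digest computed over the previous entry's digest plus its own message, so altering any earlier record breaks every later link. A sketch using only the standard library:

```python
import hashlib

# Tamper-evident audit log: each entry carries a hash of the
# previous entry, so editing history invalidates the chain.
def append_entry(log: list, message: str) -> None:
    prev = log[-1]["digest"] if log else ""
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append({"message": message, "digest": digest})

def verify(log: list) -> bool:
    """Recompute the chain from the start; any edit breaks a link."""
    prev = ""
    for entry in log:
        expected = hashlib.sha256((prev + entry["message"]).encode()).hexdigest()
        if entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True
```

This detects tampering but does not prevent it; storing the log on a remote, append-only medium (as the text recommends) is what keeps an intruder from rewriting it in the first place.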
Early history of security by design
The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.
Secure coding
Main article: Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, and capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall in a 'low security' category because they rely on features not supported by secure operating systems (like portability, et al.). In low security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of these defects, but are still prone to code/command injection and other software defects which facilitate subversion.
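Of the defect classes listed above, code/command injection is the one that even memory-safe languages remain prone to. The standard defense is to keep untrusted data out of the code channel, for example with parameterized queries instead of string-built ones. A sketch using Python's standard-library sqlite3 module (the table and inputs are invented for the example):

```python
import sqlite3

# SQL injection: string-built queries let attacker input become code;
# parameterized queries bind it as plain data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the OR clause sneaks into the query and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Safe: the placeholder binds the whole string as a single literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
```

Here `unsafe` returns every user in the table while `safe` matches nothing, because no user is literally named `nobody' OR '1'='1`.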
Recently another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007. Before this publication the problem was known but considered to be academic and not practically exploitable.[2]
In summary, 'secure coding' can provide significant payback in low security operating environments and is therefore worth the effort. Still, there is no known way to provide a reliable degree of subversion resistance with any degree or combination of 'secure coding.'
Capabilities vs. ACLs
Main articles: Access control list and Capability (computers)
Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g., Confused deputy problem). It has also been shown that ACL's promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.
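The contrast is easy to see in miniature: an ACL-based system checks the caller's identity against a per-object list on every access, while a capability-based system treats possession of an unforgeable reference as the permission itself. An illustrative sketch (the class names are hypothetical):

```python
# ACL style: access decided by checking the caller's identity
# against a list attached to the object.
class AclFile:
    def __init__(self, contents, allowed):
        self._contents = contents
        self._allowed = set(allowed)
    def read(self, user):
        if user not in self._allowed:
            raise PermissionError(user)
        return self._contents

# Capability style: holding the object *is* the permission;
# authority is delegated simply by handing the reference over.
class CapFile:
    def __init__(self, contents):
        self._contents = contents
    def read(self):
        return self._contents

acl_file = AclFile("payroll data", allowed=["alice"])
cap = CapFile("payroll data")   # only code that was given `cap` can read it
```

The confused-deputy problem arises in the ACL version: a privileged program acting for "bob" but running as "alice" passes the identity check. In the capability version there is no identity to confuse; authority travels with the reference.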
Capabilities have been mostly restricted to research operating systems and commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language.
First the Plessey System 250 and then Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most security comes from operating systems where security is not an add-on, such as OS/400 from IBM. This almost never shows up in lists of vulnerabilities for good reason. Years may elapse between one problem needing remediation and the next.
A good example of a secure system is EROS. But see also the article on secure operating systems. TrustedBSD is an example of an open source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.
Applications
Computer security is critical in almost any technology-driven industry which operates on computer systems. The issues of computer based systems and addressing their countless vulnerabilities are an integral part of maintaining an operational industry. [3]
In aviation
The aviation industry is especially important when analyzing computer security because the involved risks include expensive equipment and cargo, transportation infrastructure, and human life. Security can be compromised by hardware and software malpractice, human error, and faulty operating environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction, and human error. [4]
The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data theft or loss, network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, loss of passenger life. Military systems that control munitions can pose an even greater risk.
A successful attack does not need to be very high-tech or well funded; a power outage at an airport alone can cause repercussions worldwide.[5] One of the easiest security vulnerabilities to exploit, and arguably the most difficult to trace, is the transmission of unauthorized communications over specific radio frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications altogether. Such incidents are very common, and in the past they have altered the flight courses of commercial aircraft and caused panic and confusion. Controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore; beyond the radar's reach, controllers must rely on periodic radio communications with a third party.
Lightning, power fluctuations, surges, brown-outs, blown fuses, and various other power outages instantly disable all computer systems, since they are dependent on an electrical source. Other accidental and intentional faults have caused significant disruption of safety critical systems throughout the last few decades and dependence on reliable communication and electrical power only jeopardizes computer safety.
Notable system accidents
In 1994, over a hundred intrusions were made by unidentified hackers into the Rome Laboratory, the US Air Force's main command and research facility. Using Trojan horses, the hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders obtained classified files, such as air tasking order systems data, and by posing as trusted Rome center users were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some defense contractors, and other private sector organizations.[6] Today, a technique called ethical hack testing is used to remediate these issues.
Electromagnetic interference is another threat to computer safety and in 1989, a United States Air Force F-16 jet accidentally dropped a 230 kg bomb in West Georgia after unspecified interference caused the jet's computers to release it. [7]
A similar telecommunications accident also happened in 1994, when two UH-60 Blackhawk helicopters were destroyed by F-15 aircraft in Iraq because the IFF system's encryption system malfunctioned.[citation needed]
Terminology
The following terms used in engineering secure systems are explained below.
* A bigger OS, capable of providing a standard API like POSIX, can be built on a secure microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected: e.g. Hurd or Minix 3.
* Authentication techniques can be used to ensure that communication end-points are who they say they are.
* Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
* Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. The next sections discuss their use.
* Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
* Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
* Firewalls can either be hardware devices or software programs. They provide some protection from online intrusion, but since they allow some applications (e.g. web browsers) to connect to the Internet, they don't protect against some unpatched vulnerabilities in these applications (e.g. lists of known unpatched holes from Secunia and SecurityFocus).
* Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
* Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
* Simple microkernels can be written so that we can be sure they don't contain any bugs: e.g. EROS and Coyotos.
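The firewall entry above can be sketched as a first-match rule evaluator: traffic is compared against an ordered rule list, and anything unmatched falls through to an explicit default. The rules below are invented for the example:

```python
# First-match firewall: rules are checked in order; the first rule
# whose fields match decides, and unmatched traffic is dropped.
RULES = [  # (action, protocol, destination port) -- illustrative rules
    ("allow", "tcp", 80),    # web traffic
    ("allow", "tcp", 443),   # encrypted web traffic
    ("deny",  "tcp", 23),    # block telnet explicitly
]

def filter_packet(protocol: str, port: int) -> str:
    for action, proto, p in RULES:
        if proto == protocol and p == port:
            return action
    return "deny"   # default deny: fail secure for anything unlisted
```

Real firewalls match on many more fields (source/destination address, interface, connection state), but ordered rules plus a default-deny fallback is the common core.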
Some of the following items may belong to the computer insecurity article:
* Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer, such as through an interactive logon screen, or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems.
* Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
* Applications with known security flaws should not be run. Either leave such an application turned off until it can be patched or otherwise fixed, or delete it and replace it with some other application. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.
* Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
* Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups are a fireproof, waterproof, and heat proof safe, or in a separate, offsite location than that in which the original files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that backs up files over the Internet for both business and individuals.
o Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can be on fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of disaster. Further, it is recommended that the alternate location be placed where the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in World Trade Center I and the recovery site in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and recovery site in the same coastal region, which leads to both being vulnerable to hurricane damage (e.g. primary site in New Orleans and recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to prevent them from being stolen.
* Encryption is used to protect the message from the eyes of others. It can be done in several ways by switching the characters around, replacing characters with others, and even removing characters from the message. These have to be used in combination to make the encryption secure enough, that is to say, sufficiently difficult to crack. Public key encryption is a refined and practical way of doing encryption. It allows for example anyone to write a message for a list of recipients, and only those recipients will be able to read that message.
* Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system administrator defined rules.
* Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or fix vulnerabilities.
* Intrusion-detection systems can scan a network for people that are on the network but who should not be there or are doing things that they should not be doing, for example trying a lot of passwords to gain access to the network.
* Pinging. The ping application can be used by potential crackers to find out whether an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.
* Social engineering awareness: keeping employees aware of the dangers of social engineering, and having a policy in place to prevent it, can reduce successful breaches of the network and servers.
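The character-replacement scheme described in the encryption entry above is a classical substitution cipher. The sketch below shifts each letter by a fixed key; it illustrates the idea but is trivially breakable, and real systems use vetted algorithms such as AES or public-key schemes instead:

```python
# Toy Caesar-style substitution cipher: shift each letter by a key.
# Illustrative only -- trivially breakable, never use for real secrets.
def shift(text: str, key: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)   # leave spaces and punctuation alone
    return "".join(out)

def encrypt(text: str, key: int) -> str:
    return shift(text, key)

def decrypt(text: str, key: int) -> str:
    return shift(text, -key)
```

With only 26 possible keys, an eavesdropper can break this by trying them all, which is exactly why the text's caveat that such techniques must be combined, and in practice replaced by modern cryptography, matters.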
I hope this information helps you. You can also search Wikipedia, Google, and Yahoo for more.