uditgaurav / litmuschaos

This repository is for testing purposes.


Latest Pipeline Status: [pipeline status badge]

Coverage (container kill test): 90%, 94%

Chapter 1 - Introduction

What is Cyber Security?

Cyber security refers generally to the ability to control access to networked systems and the information they contain.

▪ Where cyber security controls are effective, cyberspace is considered a reliable, resilient, and trustworthy digital infrastructure.

▪ Where cyber security controls are absent, incomplete, or poorly designed, cyberspace is considered the wild west of the digital age.

Whether a system is a physical facility or a collection of cyberspace components, the role of a security professional assigned to that system is to plan for potential attacks and prepare for their consequences.

▪ The goals of cyber security: prevent, detect, respond.
▪ The means to achieve cyber security: people, process, technology.
▪ The mechanisms by which cyber security goals are achieved: confidentiality, integrity, and availability.

Prevent, detect, respond addresses goals common to both physical and cyber security. People, process, technology addresses methods common to both technology management in general and to cyber security management as a specialized field. Confidentiality, integrity, and availability addresses the security objectives that are specific to information.

▪ Confidentiality refers to a system's capability to limit dissemination of information to authorized use.
▪ Integrity refers to the ability to maintain the authenticity, accuracy, and provenance of recorded and reported information.
▪ Availability refers to the timely delivery of functional capability.

What Is Cyber Security Policy?

The tension between demand for cyber functionality and requirements for security is addressed through cyber security policy.
▪ The word "policy" has been used to refer to laws and regulations concerning information distribution, private enterprise objectives for information protection, computer operations methods for controlling technology, and configuration variables in electronic devices (Gallaher, Link et al. 2008).
▪ The links to and from the "governance bodies" node illustrate that cyber security policy is adopted by governing bodies as a method of achieving security goals. The figure is purposely generic, as governing bodies often exist outside of the organizations that they govern.

Domains of Cyber Security Policy

Where security is a priority for an organization, it is common to see cyber security policies issued by multiple internal departments with overlapping constituencies, who then sometimes detect policy incompatibility issues in trying to follow them all simultaneously.

1. Laws and Regulations

Nation-state cyber security policy is currently considered a subset of national security policy. Even if nation-state cyber security policy were considered to be on the same plane as foreign policy or economic policy, these policies do not have the same force as law.
▪ Policies are established and articulated through reports and speeches, through talking points and negotiations.
▪ Policy is used to guide judgment on what laws and regulations to consider; it does not refer to the laws and regulations themselves.
For example, China has clearly established a policy that cyberspace activities critical to nation-state operations shall be controlled (Bishop 2010).
▪ This policy states clearly that the Internet shall serve the interests of the economy and the state.
▪ The policy has led to laws and regulations that allow the Chinese government to segregate, monitor, and control telecommunications facilities, as well as block access to Internet sites it identifies as contrary to its interests.
▪ In the United States, by contrast, most laws and regulations that impact cyber security were not developed specifically to address issues of cyberspace, but have emerged as relevant to cyber security in the context of policy enforcement. The policy is often economic in nature. For example, any financial institution that is regulated by the Office of the Comptroller of the Currency has been subject to security audits and assessments of its Internet-facing infrastructure.
A 2009 U.S. Cyber Security Policy Review actually redefined the word policy: "Cybersecurity policy includes strategy, policy, and standards regarding the security of and operations in cyberspace, and encompasses the full range of threat reduction, vulnerability reduction, deterrence, international engagement, incident response, resiliency, and recovery policies and activities, including computer network operations, information assurance, law enforcement, diplomacy, military, and intelligence missions as they relate to the security and stability of the global information and communications infrastructure." This is the full range of issues to be considered when developing a cyber security policy.

2. Enterprise Policy

Private sector organizations are generally not as constrained as governments in turning senior management policies into actionable rules. In a corporate environment, it is typical that policies are expected to be followed upon threat of sanction, up to and including employment termination.

3. Technology Operations

In an effort to assist clients in complying with legal and regulatory information security requirements, the legal, accounting, and consulting professions have adopted standards for due diligence with respect to information security and recommended that clients model processes around them. These were sometimes proprietary to the consulting firm, but were often based on published standards such as the National Institute of Standards and Technology (NIST)'s Recommended Security Controls for Federal Information Systems and their private sector counterparts.

4. Technology Configuration

Because many technology operations standards are implemented using specialized security software and devices, technology operators often colloquially refer to the standard-specified technical configuration of these devices as "security policy." These specifications have over the years been implemented by vendors and service providers, who devised technical configurations of computing devices that would allow system administrators to claim compliance with various standards.

Strategy versus Policy

▪ Cyber security policy articulates the strategy for cyber security goal achievement and provides its constituents with direction for the appropriate use of cyber security measures.
▪ Without a clear conceptual view of cyber security influences, it would be difficult to devise cyber security strategy and corresponding policy.
▪ Key to cyber security policy formulation is:
a. to recognize that security control decisions are made regardless of whether there is a formal policy in place,
b. to understand that policy is the appropriate tool to guide multiple independently made security decisions, and
c. to absorb as much information as possible about how security decisions are influenced in the course of devising security strategy.

Chapter 2 - Cyber Security Evolution

PRODUCTIVITY

The evolution started in the 1960s with mainframe systems, the first type of computer that was affordable enough for businesses to see a return on investment from electronic data processing systems. Computers were secured with guards and gates.

Physical security procedures were devised to ensure that only people authorized to work on computers had physical access to them; these were large systems to work with.

In the late 1960s, punch cards were used for job entry, received from multiple office locations connected via cables to the main computer. Computer security staff were responsible for tracing these cables under raised floors and through wall spaces to verify that only authorized people were at work on them.

Operators were acutely aware of security risk, but the confidentiality, integrity, and availability triad was not yet an industry standard.

Apart from military and intelligence applications, confidentiality was not the major security requirement; the bigger concern was catastrophic data integrity errors introduced by human data entry. Software engineering organizations were the first to raise the security alarm, because computers were starting to control systems where faulty operation could put lives at risk. In the 1970s:
• punch cards were replaced with electronic I/O devices,
• cables were extended for internetworking,
• security personnel could no longer physically safeguard every connection, and
• screens were limited by business logic coded into the software.

The concept of cryptography was introduced: data units were encrypted using private keys and decrypted back to the original message at the destination.
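As an illustration of the principle only (not of any cipher actually used), here is a toy sketch in Python; the repeating-key XOR, the key, and the message are all hypothetical stand-ins for a vetted cipher such as the later Data Encryption Standard:

```python
# Toy illustration of symmetric encryption: both ends share a private key,
# the sender combines each byte with the keystream, and the receiver applies
# the same operation to recover the original message. Real systems used
# vetted ciphers (DES then, AES now), never a repeating-key XOR.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypts and decrypts with the same operation (XOR is its own inverse)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-private-key"                       # hypothetical shared key
ciphertext = xor_cipher(b"transfer $100 to account 42", key)
assert xor_cipher(ciphertext, key) == b"transfer $100 to account 42"
```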

In recognition of the growing confidentiality requirements, but without any good way to meet them, the U.S. National Bureau of Standards (now the National Institute of Standards and Technology [NIST]) launched an effort to achieve consensus on a national encryption standard.

In 1974, the U.S. Privacy Act was the first stake in the ground designed to establish control over information propagation. Later, word processing software was introduced for work automation, and time-sharing systems were introduced, in which clients were charged for system use.
• Companies began to specialize by industry, developing complicated software such as payroll tax calculations and commercial lease calculations.
• Users were identified through username and password.

From 1970 to 1980, minicomputers began to attract people for personal use. In the late 1970s, Apple introduced home computers. In 1981, IBM introduced the Personal Computer (PC). Local area network (LAN) cables were protected much like the computer terminals' connections to the mainframe. A new type of network equipment called a "hub" enabled the communication, and hubs had to be kept in a secure area. Mandatory access controls (MAC) allowed management to label computer objects (programs and files) and specify the subjects (users) who could access them.

Discretionary access control (DAC) schemes allowed each user to specify who else could access their files. Timesharing-type password technology was employed on the LAN, but LAN user names were primarily supported to facilitate directory services rather than to prevent determined attacks.

Cyberspace presented a new avenue of inquiry for law enforcement investigating traditional crimes.

Criminals were caught boasting of their crimes on the social networking sites of the day, which were electronic bulletin board services reached by modems over phone lines.

Law enforcement partnered with technology vendors to produce software that would recover files that criminals had attempted to delete from computers.

INTERNET

Directory services were available that allowed businesses to connect, and be connected to, the research- and military-restricted Advanced Research Projects Agency (ARPA) network, or ARPANET, whose use case and name were relaxed as it evolved into the public Internet. Technology-savvy companies quickly registered their domain names so that they could own their own corner of cyberspace. Researchers were concerned about the potential for system abuse due to the exponential expansion in the number of connected computers.

Robert Morris, a researcher at AT&T Bell Laboratories, was an early computer pioneer. In 1988, Robert Tappan Morris (son of Robert Morris) devised the first Internet worm. The "Morris worm" accessed computers used as email servers, exploited vulnerabilities to identify all the computers that were known to each email server, and then contacted all of those computers and attempted the same exploits. Internet communication virtually stopped within a few hours; computing resources were so overwhelmed by the worm's activities that they had no processing cycles or network bandwidth left for transaction processing, leaving business processes disrupted. Sites that had installed a firewall (a method of inspecting each individual information packet within a stream of network traffic) were spared. The firewall was designed to allow network access to only those packets whose source and destination matched those on a previously authorized list; the network address and port number identified each communication.

The Bell Labs firewall was hastily employed to safeguard AT&T’s email servers, and the impact to AT&T from the Morris worm was minimal. The primary cyber security implementation strategy of choice since then has been to deploy firewalls.
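The core allowlist idea is simple enough to sketch; the Python below is illustrative only, and the addresses, port, and rule are hypothetical:

```python
# Minimal sketch of the allowlist firewall idea described above: a packet
# passes only if its (source, destination, port) tuple was previously
# authorized; everything else is dropped.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str   # source IP address
    dst: str   # destination IP address
    port: int  # destination port

# Previously authorized (source, destination, port) combinations.
AUTHORIZED = {
    ("192.0.2.10", "198.51.100.5", 25),  # mail relay allowed to reach the SMTP server
}

def permit(pkt: Packet) -> bool:
    return (pkt.src, pkt.dst, pkt.port) in AUTHORIZED

print(permit(Packet("192.0.2.10", "198.51.100.5", 25)))  # True: on the list
print(permit(Packet("203.0.113.7", "198.51.100.5", 25)))  # False: dropped
```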

The Morris worm had a profound effect on the Internet community. ARPA, which was officially managing the network, responded by allocating a team, the Computer Emergency Response Team (CERT), to provide technical assistance to those who suffered from cyber security problems.

The same types of vulnerabilities the Morris worm exploited in Internet-facing email servers also existed in systems that presented modem interfaces to the public.

Most hackers were interested in stealing some system time to play games, without the knowledge of the vulnerable system's owner.

Those who stole time to play on computers were known as "joyriders." By 1992, few hackers had profit motives.

In 1986, Cliff Stoll of Lawrence Berkeley National Laboratory detected a billing error in the range of 75 cents of computer time that was not associated with any of his users.

Stoll ended up tracking the missing cents of computing time to an Eastern European espionage ring.

Stoll published a detective account of the investigation called "The Cuckoo's Egg."

The Cuckoo's Egg set off a large-scale effort among technology managers to identify and lock down access to computers via modems. No firewall technology existed for the phone system, but various combinations of phone-system technology met the requirements.

One such combination is caller ID plus dial-back. Caller ID checks the calling number against a database of authorized callers. However, anyone with customer-premise phone equipment can present any number to a receiving phone via the caller-ID protocol, impersonating an authorized home phone number, or "spoofing" an authorized origination. So it is not secure simply to accept the call based on the fact that caller ID presents a known phone number. After verifying that the number is valid, the dialed computer hangs up and dials back the authorized number to make sure it was not spoofed.

With dial-back modems in place, it became reasonably safe to allow remote access from home networks and other locations.
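A sketch of that logic, assuming a hypothetical database of authorized numbers and a stand-in dial routine for the modem's dial-back:

```python
# Sketch of the caller-ID + dial-back control: the presented number is only
# a hint (it can be spoofed), so the modem hangs up and dials the number
# stored in its own authorization database before granting access.
AUTHORIZED_NUMBERS = {"+1-555-0100", "+1-555-0101"}  # hypothetical database

def handle_inbound_call(presented_caller_id: str, dial) -> bool:
    if presented_caller_id not in AUTHORIZED_NUMBERS:
        return False  # unknown number: reject immediately
    # Caller ID can be spoofed, so never trust it directly. Hang up and
    # dial back the number on record; only a caller who actually controls
    # that line will answer.
    return dial(presented_caller_id)

# `dial` is a stand-in for the modem's dial-back routine.
granted = handle_inbound_call("+1-555-0100", dial=lambda number: True)
print(granted)  # True only if the dial-back to the stored number succeeds
```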


In the corresponding network diagram, the devices represent the logical locations of the firewalls and the telecommunication line connections to other firms.

The telecommunication lines are portrayed as logically segmented spaces where private lines to business partners terminate on the internal network.

All these network periphery controls still did not prevent hackers and joyriders from disrupting computer operations with viruses.

Viruses were distributed on floppy disks and were also implanted on advertising websites visited by corporate and government Internet users.

Action on viruses:

Digital signatures were created for each virus by identifying each file it altered and the types of logs it left behind.

Antivirus software was created, and the digital signatures of each virus were recorded.

Software security flaws and bugs were detected by cyber forensics. Because the signature that identified one virus was tied not to the software flaw but to the files deposited by the virus itself, a virus writer could slightly modify his or her code to take advantage of the same software vulnerability and evade detection by antivirus software.
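A minimal sketch of such signature scanning; the signature byte patterns are hypothetical, and the limitation above is visible in the code: change one byte of the deposited file and the pattern no longer matches.

```python
# Sketch of early signature-based detection: each known virus is identified
# by a byte pattern found in the files it deposits, not by the software
# flaw it exploits.
SIGNATURES = {
    "example-worm-a": bytes.fromhex("deadbeef4242"),  # hypothetical signatures
    "example-worm-b": b"\x90\x90\xcc\xcc",
}

def scan(file_bytes: bytes) -> list:
    """Return the names of any known signatures found in the file."""
    return [name for name, pattern in SIGNATURES.items() if pattern in file_bytes]

infected = b"...\xde\xad\xbe\xefBB..."
print(scan(infected))  # ['example-worm-a']
```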

The antivirus software vendors' cyber forensics specialists were also usually able to identify the software security bugs or flaws in operating systems or other software that had been exploited by the viruses. Customer feedback on the initial software release determined which new features would be added and which bugs and flaws would be repaired. These fixes were known as "patches" to software.

• The word “patch” is derived from the physical term meaning a localized repair. Its origin in the context of computers referred to a cable plugged into a wall of vacuum tubes that altered the course of electronic processing in an analog computer by physically changing the path of code execution.

Patches are small files that must frequently be installed on complex software in order to prevent an adversary from exploiting vulnerable code and thereby causing damage to systems or information.

Vulnerabilities in the e-commerce sector came to be called "the port 80 problem." Port 80 is the port on a firewall that has to be open in order for external users to access web services.

Listening on port 80 of a server facing the Internet, a web server program was designed to accept user commands instructing it to display content, but it would also allow commands instructing it to accept and execute programs provided by a user.

The immediate result of the port 80 problem analysis was that firewalls were installed not just at the network periphery but in a virtual circle around any machine that faced the Internet.

A demilitarized zone (DMZ) network architecture became the new security standard (Bell Labs). A DMZ was an area of the network that allowed Internet access to a well-defined set of specific services. In a DMZ, all computer operating software accessible from the Internet was "hardened" to ensure that no services other than those explicitly allowed could be accessed, or else the machines were considered "sacrificial" systems that were purposely not well secured but were closely monitored to see if attackers were targeting the enterprise (Ramachandran 2002). A cyber DMZ is surrounded by checkpoints, including firewalls, on all sides.

The design of a DMZ requires that Internet traffic be filtered so that packets can only access the servers that have been purposely deployed for public use and are fortified against expected attacks.
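A sketch of that default-deny, zone-based filtering; the zones, ports, and rules below are hypothetical:

```python
# Sketch of DMZ-style filtering as described above: Internet traffic may
# reach only the hardened public services in the DMZ; nothing from the
# Internet may reach the internal network directly.
RULES = [
    # (source zone, destination zone, port, action); None matches any port
    ("internet", "dmz",       443, "allow"),  # public web service
    ("internet", "dmz",        25, "allow"),  # public mail relay
    ("dmz",      "internal", 1433, "allow"),  # web server to database only
    ("internet", "internal", None, "deny"),   # no direct path inside
]

def decide(src_zone: str, dst_zone: str, port: int) -> str:
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src, rule_dst) == (src_zone, dst_zone) and rule_port in (None, port):
            return action
    return "deny"  # default-deny: anything unlisted is dropped

print(decide("internet", "dmz", 443))       # allow
print(decide("internet", "internal", 443))  # deny
```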

It became standard procedure that the path to the internal network was opened only with the express approval of a security architect, who was responsible for testing the security controls on all DMZ and internally accessible software. The huge growth of e-commerce was envied by competing sites.

They attempted to stop the flow of e-commerce to competitors by intentionally consuming all the available bandwidth allowed through the competitor's firewall to the competitor's websites. Because these attacks prevented other Internet users from using the web services of the stricken competitor, they were designated "denial of service" attacks; launched from geographically dispersed machines, they were called "distributed denial of service" (DDoS) attacks. Security researchers also recognized that passwords alone were not enough for protection, and stronger authentication factors appeared: hand-held devices, biometrics, and credit-card-sized handheld devices that generate tokens.
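The token-generating devices can be sketched using the time-based one-time-password construction later standardized as TOTP (RFC 6238), here as a stand-in for the proprietary token algorithms of the day; the shared secret and parameters are hypothetical:

```python
# Minimal sketch of the idea behind token-generating handheld devices:
# the device and the server share a secret, and both compute a short code
# from the current time window, so a stolen password alone is not enough.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval           # current time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"device-and-server-secret"          # hypothetical shared secret
print(totp(shared_secret))  # same value on device and server within the window
```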

The term "blacklist" came to be known in computer security literature as the list of websites that were known to propagate malicious software ("malware"). One of the first uses of the technology was a list of the uniform resource locators (URLs) corresponding to such Internet sites.

A web proxy server blocks a user from accessing sites on the blacklist. The proxy is enforced because browser traffic is not allowed outbound through the network periphery by the firewalls unless it comes from the proxy server, so users have to traverse the proxy service in order to browse.
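A sketch of the proxy's check; the hostnames are hypothetical:

```python
# Sketch of the proxy-enforced blacklist: outbound browsing must traverse
# the proxy, and the proxy refuses any URL whose host appears on the list
# of known malware distributors.
from urllib.parse import urlparse

BLACKLIST = {"malware.example.net", "phish.example.org"}

def proxy_allows(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host not in BLACKLIST

print(proxy_allows("http://malware.example.net/payload"))  # False: blocked
print(proxy_allows("https://news.example.com/article"))    # True: forwarded
```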

With the immense growth in the release of various forms of malicious attacks, it became difficult to keep updating the existing mechanisms; this gave rise to enterprise security management.

Firewalls were placed on the internal side of the telecommunications lines that privately connected firms to their third-party service providers. Only expected services were allowed through, and only to the internal users or servers that required the connectivity to operate.

The Vs with the lines through them indicate that antivirus software was installed on the types of machines identified underneath them. The Ps stand for patches that were, and still are, frequently required on the associated computers. The shade of gray used to identify security technology is the same throughout the diagram. The dashed line encircles the equipment that is typically found in a DMZ.

e-COMMERCE

e-Commerce established a direct connection between the market and customers' devices. Sites were therefore fraught with risk of fraud and threats to confidentiality, because of the number of telecommunications devices that suddenly gained unfettered access to customer information, including credit card numbers. In 1995, a new encrypted communications protocol called Secure Socket Layer (SSL) was introduced. In 1999, the protocol was enhanced by committee and codified under the name Transport Layer Security (TLS) (Rescorla and Dierks 1999).
• The TLS protocol requires web servers to have long identification strings, called certificates.
• The software allowed companies to create a root certificate, and the root certificate was used to generate server certificates for each company web server.

For critical applications that facilitated high asset value transactions, certificates could also be generated for each customer, which the SSL protocol referred to as a client.

The SSL protocol thus made use of certificates to identify client to server and server to client. Once mutually identified, both sides would use data from the certificates to generate a single new key they both would use for encrypted communication.

This allows each web session to look different from the point of view of an observer on the network, even if the same information, such as the same credentials, is transmitted.

Identity management systems were developed to ease administration and to integrate customer login information and online activity with existing customer relationship management processes. Security strategies were devised to control and monitor code development, testing, and production environments; source code control and change detection systems became standard cyber security equipment. Remote access still required two-factor authentication, and this was judged an adequate way to maintain access control, particularly when combined with other safeguards, such as a control that prevents a user from having two simultaneous sessions. To maintain the confidentiality of customer information, the entire remote access session had to be encrypted, so virtual private network (VPN) technology was introduced. Innovative security companies sought to relieve workstations of their virus-checking duties by providing network-level intrusion detection systems (IDSs).
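The certificate and session-key machinery described above survives in today's TLS stack. A minimal sketch of a verified client session using Python's standard ssl module, connecting to example.com purely for illustration:

```python
# Sketch of a TLS (formerly SSL) session: the server presents a certificate
# chained to a trusted root, the client verifies it, and both sides derive
# fresh session keys, so each session looks different on the wire even when
# the same data is sent.
import socket
import ssl

context = ssl.create_default_context()  # loads trusted root certificates

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        cert = tls.getpeercert()   # the server certificate, already verified
        print(tls.version())       # negotiated protocol, e.g. TLSv1.3
        print(cert["subject"])     # identity asserted by the certificate
```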

Intrusion detection systems applied signature-based antivirus techniques to network traffic, noticing what viruses looked like as they traveled across the network. Technical configurations such as firewall rule sets, security patch specifications, wireless encryption settings, and password complexity rules were colloquially referred to as "security policy."

Security policy servers were established to keep track of which configuration variables were supposed to be on which device; without them, if a device failed or was misconfigured, it would take too much work to recreate the policies.

Security policy servers economically and effectively allowed the technology configurations to be centrally monitored and managed.
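A sketch of the comparison such a policy server automates; the device class and configuration variables are hypothetical:

```python
# Sketch of what a security policy server automates: the intended
# configuration for each device class is stored centrally, and observed
# device settings are compared against it.
POLICY = {  # intended configuration per device class
    "web-server": {"min_password_length": 12, "patch_level": "2024-05"},
}

def compliance_gaps(device_class: str, observed: dict) -> dict:
    """Return the settings where the device deviates from central policy."""
    intended = POLICY[device_class]
    return {k: (v, observed.get(k)) for k, v in intended.items()
            if observed.get(k) != v}

print(compliance_gaps("web-server",
                      {"min_password_length": 8, "patch_level": "2024-05"}))
# {'min_password_length': (12, 8)} -> intended vs. observed
```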

In spite of so many security policies and strategies, cyber crimes were still reported. Security Information Management (SIM) servers were therefore deployed to store and query massive numbers of activity logs and to validate the logged activity against policy compliance.
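A sketch of the kind of question a SIM server answers over stored logs; the log entries and the three-failure threshold are hypothetical:

```python
# Sketch of a SIM-style query over stored activity logs: events are kept
# centrally, and compliance questions are answered by querying them, e.g.
# "which accounts had repeated login failures?".
from collections import Counter

logs = [
    {"event": "login_failure", "user": "alice", "host": "web-01"},
    {"event": "login_failure", "user": "alice", "host": "web-01"},
    {"event": "login_failure", "user": "alice", "host": "web-02"},
    {"event": "login_success", "user": "bob",   "host": "web-01"},
]

failures = Counter(e["user"] for e in logs if e["event"] == "login_failure")
suspicious = [user for user, n in failures.items() if n >= 3]
print(suspicious)  # ['alice'] -> flagged for policy review
```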

Chapter 3 - Cyber Security Objectives

Cyber Security Metrics

Measurement is the process of mapping from the empirical world to the formal, relational world. Combinations of measures corresponding to an elusive attribute are considered derived measures. Derived measures are subject to interpretation in the context of an abstract model of the thing to be measured.

Metrics is a generic term that refers to the set of measures that characterize a given field. Cyber security is not the direct object of measurement, nor a well-enough-understood attribute of a system to easily define derived measures or metrics.

So those engaged in cyber security metrics are measuring other things and drawing conclusions about security goal achievement from them.

This challenge has spawned a field of study called security metrics (Jaquith and Geer 2005). The traditional metric in physical security was the ability of the system to achieve its goal of meeting the design basis threat (DBT).

The DBT describes the characteristics of the most powerful and innovative adversary that it is realistic to expect to protect against.

Adopting a DBT approach to security implies that the strength of the security measures required by a system is derived from a technical specification of how it is likely to be attacked. If the DBT is a force of 20 people with access to explosives of a given type, then the physical barrier to unauthorized access must withstand the force these 20 people could bring into contact with the system. To achieve this:
• barrier protection materials are specified,
• threat delay and response systems are designed, and
• validation tests are conducted accordingly.

• In cyber security, the terms perpetrator, threat, exploit, and vulnerability are terms of the trade; their meanings are distinct and interrelated.
• A perpetrator is an individual or entity.
• A threat is a potential action that may or may not be committed by a perpetrator.
• An exploit refers to the technical details that comprise an attack.
• A vulnerability is a system characteristic that allows an exploit to succeed.
• The mainstay of the systemigram of Figure 3.1 is read as, "Security thwarts perpetrators who enact threats that exploit system vulnerabilities to cause damage that adversely impacts value."

• As each type of system vulnerability reached the stage of security community awareness, a corresponding set of security countermeasure technologies came to the market and became part of an ever-increasing number of best practice recommendations.
• Countermeasures were applied to vulnerable system components, and threats to systems were assumed to be covered by the aggregated result of implementing all of them.
• Figure 3.2 illustrates this approach by adding these concepts and the relationships between them to the systemigram of Figure 3.1.

Figure 3.3 illustrates the difference between this traditional approach to security architecture and a more holistic, system-level approach.
• It depicts vulnerable attributes of a system as a subset of system attributes, and perpetrator targets as a subset of the system's vulnerable attributes.
• Traditionally, security engineering has attacked this problem with security-specific components, derogatorily referred to as "bolt-ons."
• These are often labelled "compensating controls," a technical term in audit that refers to management controls that are devised because the system itself has no controls that would minimize damage were the vulnerability to be exploited.
• Bolt-ons are by definition work-arounds that are not part of the system itself, such as firewalls. The lower part of Figure 3.3 illustrates the contrast between a bolt-on approach to solving security problems and a security design approach that instead is expected to alter system-level attributes to eliminate or reduce vulnerability.

Security Management Goals

• Security programs that are motivated by regulatory compliance are not specifically designed to achieve organizational goals for security, but instead are designed to demonstrate compliance with security management standards.
• The standards themselves have become de facto security metrics taxonomies that cross organizational borders.
• Practitioners are often advised to organize their metrics around the requirements in security management standards against which they may expect to be audited (Herrmann 2007; Jaquith 2007).
• There is even an international standard for using the security management standards to create security metrics (ISO/IEC 2009b).
• The disadvantage of this type of approach to security management is that details of standards compliance are seen as isolated technology configurations to be mapped to a pre-established scorecard.
• None of these standards comprise a generally accepted method of directly measuring security in terms of achievement in thwarting threats (King 2010).


• For example, individuals who have changed jobs sometimes measure the security at the old and new firms in terms of the degree of difficulty for them to access important data and information, both locally and remotely.
• For example, they may identify the number of passwords they have to use from their desktops at home to access customer data in the office, and decide that the firm that makes them use more authentication factors is more secure.
• Figure 3.5 shows this type of layered-defense depiction of system security. Such layering is often called defense in depth.

Figure 3.5 provides a layered perspective on a typical network.
• It has multiple security "layers," as described in the central lower part of the diagram.
• At the top of the diagram, the "Remote Access" user is illustrated as being required to authenticate from a workstation, which may or may not be controlled by the enterprise.
• From the network access point, the remote user can directly authenticate to any of the other layers in the internal network.
• This is why remote access typically requires a higher level of security: once on the internal network, there are a variety of choices for platform access.

Web Application Path
• In the case of the web application, the existence of the layers does not actually constitute defense in depth.
• This is because such Internet-accessible applications are usually accessible with just one log-in.
• A user can access the application without authenticating to the network, because the firewall allows anyone on the Internet to have direct access to the login screen of the application on the web server.
• Once within the application, the data authentication layer is not presented to the user; the application automatically connects to it on behalf of the user.

• These conveniences are depicted in the figure as bridges through the layers that the remote user would have to authenticate to pass, but the application user does not. Hence, to apply the term defense in depth to this case would be a misnomer.

• In Figure 3.6, it is recommended that security metrics be raised to consider business-level requirements for security.
• However, there is an issue with this approach: there is currently no convergence around a single organizational management structure for security, so there can be no corresponding authoritative business-level security metrics taxonomy.
• Instead, there has been a great deal of consensus around standards for security process (ISO/IEC 2005; ISO/IEC 2005; ISACA 2007; ISF 2007; Ross, Katzke et al. 2007).

• The NIST report also suggested a classification of security metrics into leading, concurrent, and lagging indicators of security effectiveness.
• An example of a leading indicator is a positive assessment of the security of a system that is about to be deployed.
• Concurrent indicators are technical target metrics that show whether security is currently configured correctly or not.
• Lagging indicators would be discovery of past security incidents due to inadequate security requirements definition, or failures in maintaining specified configurations.
• If the goal is to know the current state of system security, concurrent indicators make better metrics.
• Recommendations for security metrics often suggest a hierarchical metrics structure where business process security metrics are at the top.
• The next level includes support process metrics like information security management, business risk management, and technology products and services (Savola 2007).
• Each leaf-level measure is combined with its peers to provide an aggregation measure that determines the metric above them in the hierarchy.

• The average percentage of target goals achieved in each subset for the four business areas would be called the "Product Security" and "Service Security" metrics, respectively.
• The average of those two would be the "Technology Security" metric.
• This method of measurement is still verification that the design for security was implemented (or not) as planned, rather than validation that the top-level security goals are met via the process of decomposition and measurement of leaf performance.
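A sketch of that roll-up with hypothetical leaf percentages; note that the averages verify only that targets were implemented as planned, saying nothing about thwarted threats:

```python
# Sketch of the hierarchical roll-up described above: leaf-level target
# percentages are averaged into "Product Security" and "Service Security",
# which in turn average into the "Technology Security" metric.
from statistics import mean

product_targets = {"app-a": 90.0, "app-b": 70.0}   # % of targets met per area
service_targets = {"svc-a": 80.0, "svc-b": 100.0}

product_security = mean(product_targets.values())                  # 80.0
service_security = mean(service_targets.values())                  # 90.0
technology_security = mean([product_security, service_security])   # 85.0

print(product_security, service_security, technology_security)
```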


Language:Go 76.6%Language:Python 7.4%Language:Shell 6.7%Language:Dockerfile 5.2%Language:JavaScript 2.6%Language:Makefile 1.6%