Whether a family member gets their credit card number stolen, a friend gets their Facebook account hijacked, or our own web site is blacklisted for sending spam, we are all affected by phishing attacks, some of us worse than others.
Computer incidents happen. They just do. No matter how expansive and proactive a particular team may be, the Computer Network Defense (CND) job will include incident response.
Why? Because in part, CND is reactive. A properly running CND team will include a subgroup of Attack and Exploitation members who will actively look for vulnerabilities in your network, but that subgroup is dwarfed by the number of active attackers in the world.
So what should a CND team do? The team should prepare for incident handling and response. As it turns out, when it comes to incident handling and response, prior planning provides utmost performance.
A brief history
In the beginning was ARPA. And the Internet was with ARPA. And the Internet was ARPA. The Advanced Research Projects Agency (ARPA, later known as DARPA) network was the precursor of what we now know as the Internet.
In 1988, Robert Morris made international history… by mistake. A Cornell graduate student at the time, Morris crafted what became known as the Morris Worm. The worm was intended to gauge the size of the then-current Internet by exploiting weak passwords and services available on most networked devices of the day. But Morris coded his worm poorly. The mistake was that the worm would reinfect an already compromised host as well as spread to other computers, thereby overwhelming the host with processes. When a network engineer or systems administrator rebooted the machine to regain access, nearby computers would quickly reinfect it. Recovery was not a simple task, and the Internet came to a halt.
At the time, DARPA and the Defense Department were positioning to have a guaranteed-delivery, always-available information network. The Morris Worm helped them realize the vulnerability of the net, and their response was to create the Computer Emergency Response Team (now known as CERT[tm]) hosted under the Software Engineering Institute (SEI) at Carnegie Mellon University. CERT was chartered to be a coordination center for computer network defenders in the US and around the world.
The NIST Incident Guide
NIST’s Computer Security Incident Handling Guide is an excellent source on how to organize and design a Computer Security Incident Response Capability. Realize that it will take some time to digest the entire document. You’ll have to let go of some ideas you’ve likely held on to, and learn new techniques that have been proven in the art of incident response.
But why would you want to rewicker your incident handling policies, plans, and procedures? This is a costly endeavor, no? Well, yes, it is. But it will help your organization prepare for incident response, will help in the process of response and recovery, and may even help prevent an incident in the first place.
If your management is resistant to reviewing the policies, plans, and procedures in place, you might want to help them reconsider their position. If you happen to work in an industry or at a company that is subject to external validation, or that maintains information requiring incident response (read: just about everyone, including those who handle SOX, PHI, PII, PCI, and nearly any other regulated data), you will want to make sure your policies, plans, and procedures follow NIST, even if not strictly required. When you are breached (and it is a when, not an if), your adherence to NIST or another standard is likely to go a very long way in reducing your fines.
Reviewing the NIST guide
The NIST Computer Security Incident Handling Guide is very well thought out and presented. The following sections present abridged direct quotes from the NIST guide.
Chapter 1: Introduction
This document has been created for computer security incident response teams (CSIRTs), system and network administrators, security staff, technical support staff, chief information security officers (CISOs), chief information officers (CIOs), computer security program managers, and others who are responsible for preparing for, or responding to, security incidents.
Chapter 2: Organizing a Computer Security Incident Response Capability
Organizing an effective computer security incident response capability (CSIRC) involves several major decisions and actions. One of the first considerations should be to create an organization-specific definition of the term “incident” so that the scope of the term is clear. The organization should decide what services the incident response team should provide, consider which team structures and models can provide those services, and select and implement one or more incident response teams. Incident response plan, policy, and procedure creation is an important part of establishing a team, so that incident response is performed effectively, efficiently, and consistently, and so that the team is empowered to do what needs to be done. The plan, policies, and procedures should reflect the team’s interactions with other teams within the organization as well as with outside parties, such as law enforcement, the media, and other incident response organizations. This section provides not only guidelines that should be helpful to organizations that are establishing incident response capabilities, but also advice on maintaining and enhancing existing capabilities.
Chapter 3: Handling an Incident
The incident response process has several phases. The initial phase involves establishing and training an incident response team, and acquiring the necessary tools and resources. During preparation, the organization also attempts to limit the number of incidents that will occur by selecting and implementing a set of controls based on the results of risk assessments. However, residual risk will inevitably persist after controls are implemented. Detection of security breaches is thus necessary to alert the organization whenever incidents occur. In keeping with the severity of the incident, the organization can mitigate the impact of the incident by containing it and ultimately recovering from it. During this phase, activity often cycles back to detection and analysis—for example, to see if additional hosts are infected by malware while eradicating a malware incident. After the incident is adequately handled, the organization issues a report that details the cause and cost of the incident and the steps the organization should take to prevent future incidents. This section describes the major phases of the incident response process—preparation, detection and analysis, containment, eradication and recovery, and post-incident activity—in detail. Figure 3-1 illustrates the incident response life cycle.
Chapter 4: Coordination and Information Sharing
The nature of contemporary threats and attacks makes it more important than ever for organizations to work together during incident response. Organizations should ensure that they effectively coordinate portions of their incident response activities with appropriate partners. The most important aspect of incident response coordination is information sharing, where different organizations share threat, attack, and vulnerability information with each other so that each organization’s knowledge benefits the other. Incident information sharing is frequently mutually beneficial because the same threats and attacks often affect multiple organizations simultaneously.
As mentioned in Section 2, coordinating and sharing information with partner organizations can strengthen the organization’s ability to effectively respond to IT incidents. For example, if an organization identifies some behavior on its network that seems suspicious and sends information about the event to a set of trusted partners, someone else in that network may have already seen similar behavior and be able to respond with additional details about the suspicious activity, including signatures, other indicators to look for, or suggested remediation actions. Collaboration with the trusted partner can enable an organization to respond to the incident more quickly and efficiently than an organization operating in isolation.
This increase in efficiency for standard incident response techniques is not the only incentive for crossorganization coordination and information sharing. Another incentive for information sharing is the ability to respond to incidents using techniques that may not be available to a single organization, especially if that organization is small to medium size. For example, a small organization that identifies a particularly complex instance of malware on its network may not have the in-house resources to fully analyze the malware and determine its effect on the system. In this case, the organization may be able to leverage a trusted information sharing network to effectively outsource the analysis of this malware to third party resources that have the adequate technical capabilities to perform the malware analysis.
This section of the document highlights coordination and information sharing. Section 4.1 presents an overview of incident response coordination and focuses on the need for cross-organization coordination to supplement organization incident response processes. Section 4.2 discusses techniques for information sharing across organizations, and Section 4.3 examines how to restrict what information is shared or not shared with other organizations.
Appendix A: Incident Handling Scenarios
Incident handling scenarios provide an inexpensive and effective way to build incident response skills and identify potential issues with incident response processes. The incident response team or team members are presented with a scenario and a list of related questions. The team then discusses each question and determines the most likely answer. The goal is to determine what the team would really do and to compare that with policies, procedures, and generally recommended practices to identify discrepancies or deficiencies. For example, the answer to one question may indicate that the response would be delayed because the team lacks a piece of software or because another team does not provide off-hours support.
The questions listed below are applicable to almost any scenario. Each question is followed by a reference to the related section(s) of the document. After the questions are scenarios, each of which is followed by additional incident-specific questions. Organizations are strongly encouraged to adapt these questions and scenarios for use in their own incident response exercises.
Appendix B: Incident-Related Data Elements
Organizations should identify a standard set of incident-related data elements to be collected for each incident. This effort will not only facilitate more effective and consistent incident handling, but also assist the organization in meeting applicable incident reporting requirements. The organization should designate a set of basic elements (e.g., incident reporter’s name, phone number, and location) to be collected when the incident is reported and an additional set of elements to be collected by the incident handlers during their response. The two sets of elements would be the basis for the incident reporting database, previously discussed in Section 3.2.5. The lists below provide suggestions of what information to collect for incidents and are not intended to be comprehensive. Each organization should create its own list of elements based on several factors, including its incident response team model and structure and its definition of the term “incident.”
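As a sketch of that idea, the two element sets might be modeled like this in Python. The field names below are illustrative choices for this example, not elements prescribed by NIST; each organization should substitute its own list:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class IncidentReport:
    # Basic elements collected when the incident is first reported
    reporter_name: str
    reporter_phone: str
    location: str
    reported_at: datetime
    summary: str
    # Additional elements collected by incident handlers during the response
    affected_hosts: List[str] = field(default_factory=list)
    indicators: List[str] = field(default_factory=list)
    status: str = "new"

report = IncidentReport(
    reporter_name="Jane Admin",
    reporter_phone="555-0100",
    location="Data center B",
    reported_at=datetime(2015, 3, 2, 14, 30),
    summary="Unusual outbound traffic from a web server",
)
```

A standard record like this is what makes an incident reporting database, and consistent reporting metrics, possible in the first place.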
History of the Internet, http://en.wikipedia.org/wiki/History_of_the_Internet#Three_terminals_and_an_ARPA
2014 was yet another banner year in Computer Security. The industry met with the Heartbleed SSL vulnerability, Point of Sale equipment attacks against Target and Home Depot, and the Shellshock vulnerability in a piece of software that has been around for more than twenty years.
If you happen to not remember any of those, well, you must be happily sailing the islands. Good for you!
But for the rest of us in technology, and particularly for those in computer security, we’ve had quite a year.
One of the outgrowths of these vulnerabilities being exploited has been that it seems “everyone” has heard the term “zero day”. But what is a zero day?
Before we begin
Before exploring anything else here, let’s set the record straight. Regardless of any formal definition of zero day, the responsibility of the defense team is to prevent loss of confidentiality, loss of availability, and loss of integrity of data and systems. The responsibility (if you will) of the attack team is to do just the opposite. In some ways, defining zero day is going to feel like an academic exercise. In some ways, it is academic. That said, let’s move on.
Exploits vs Vulnerabilities
As we work toward defining zero day, let’s explore a couple of supporting ideas, starting with exploits and vulnerabilities.
In security, a vulnerability is a weakness that allows a threat to compromise the integrity of a resource. NIST SP 800-30, “Risk Management Guide for Information Technology Systems”, defines a vulnerability as a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy.
That said, an exploit is an attack on a resource that takes advantage of a vulnerability. Think of it this way. A vulnerability is an attack surface. But it takes a special kind of vulnerability to be exploitable. There is no exploit unless a vulnerability exists, but not all vulnerabilities are exploitable.
Let’s create a non-electronic example to help understand the ideas. Say you keep paper copies of all credit card transactions in a file cabinet. You are vulnerable to having all of this PCI data compromised and stolen by an adversary. The vulnerability is that all the PCI data sits in one file cabinet, so the exploit would be that someone walks in and takes your file cabinet. What do you do to control the vulnerability? You place locks on the cabinet and your front door, and you hire an armed security guard and a guard dog to police your premises. Because of these safeguards, the original vulnerability is moot. The new vulnerability is several steps deep: a defense in depth. Now the adversary has to disable the dog, disarm the guard, pick the lock on the front door, and pick the lock on the cabinet. You still have vulnerabilities, but all of them must be exploitable at the same time for an exploit to occur.
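The file cabinet chain can be sketched in a few lines of Python. The control names are, of course, just the ones from our example:

```python
# The controls from the file cabinet example; the attacker must defeat
# every one of them in the same attempt for the exploit to succeed.
REQUIRED_CONTROLS = ("dog", "guard", "front_door_lock", "cabinet_lock")

def exploitable(defeated):
    """The chain is exploitable only if every control is defeated at once."""
    return all(defeated.get(control, False) for control in REQUIRED_CONTROLS)
```

Defeating the dog alone accomplishes nothing; only the full chain opens the cabinet, which is the whole point of defense in depth.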
The elusive Zero Day
Now that we understand vulnerability chains and exploitability, let’s come to an understanding of what a zero day is, and what a zero day is not. If you’ve seen literature about a security vulnerability, that vulnerability is likely not a zero day (I’ll get to that “likely” in a moment). To be thorough: systems may remain vulnerable to attack after a vulnerability is patched, but that exposure is not the result of a zero day; it is the result of an unpatched system.
“Wait, what?”, you might be asking. “How is a zero day any different from an unpatched system vulnerability?” Okay, let’s try this. A zero day is a vulnerability for which the protectors have had zero days to create a patch. Once the protectors are aware of the vulnerability, it is no longer a zero day.
That said, a vulnerability that has been disclosed to the protectors but for which a patch has not been created or deployed is still a vulnerability, but it is no longer a zero day. Really, though, zero day is even a little more elusive than this. Let’s be honest: being hit by an exploit will always feel like a zero day, because you likely did not take the attack vector seriously.
Timeline of vulnerabilities
Protecting systems often relies on patches. So what is a reasonable timeframe between the presentation of a vulnerability to the vendor and a patch? Some reports indicate that it takes vendors more than ten months to develop a patch. Google has put the brakes on such long timelines, though: Google’s Project Zero gives the vendor 90 days between the time a vulnerability is presented to the vendor and the time the vulnerability is made known to the world.
Exploiting the SDLC
Exploiting systems truly relies on exploiting the Systems Development Lifecycle (or SDLC). The SDLC starts with the first thoughts of a system, and continues through retirement or disposal of the system. Wikipedia has a great article on SDLC, and we’ll visit and organize a few steps that are particularly important when discussing exploitation:
(development) The development team creates software.
(initial deployment) The software is distributed to end user teams.
(installation) The software is installed by end user teams.
(feedback) The development team is made aware of requested upgrades and security issues.
(patch development) The development team creates patches.
(patch deployment) Patches are distributed to end users.
(patch installation) End user teams install the patches.
(repeat) The cycle returns to the feedback step.
(end of life) At some point the product reaches end of life and is no longer maintained.
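The loop above can be sketched as a simple ordered enumeration, which is handy when reasoning about where a product sits in the cycle. This is a rough model for illustration, not part of any formal SDLC standard:

```python
from enum import Enum

class SDLCStage(Enum):
    DEVELOPMENT = 1
    INITIAL_DEPLOYMENT = 2
    INSTALLATION = 3
    FEEDBACK = 4
    PATCH_DEVELOPMENT = 5
    PATCH_DEPLOYMENT = 6
    PATCH_INSTALLATION = 7
    END_OF_LIFE = 8

def next_stage(stage):
    """Advance through the lifecycle. After patch installation the cycle
    returns to feedback; reaching end of life is a separate vendor decision,
    and once there, the product never leaves."""
    if stage is SDLCStage.END_OF_LIFE:
        return stage
    if stage is SDLCStage.PATCH_INSTALLATION:
        return SDLCStage.FEEDBACK
    return SDLCStage(stage.value + 1)
```

The windows between consecutive stages are exactly where the vulnerability discovery opportunities discussed next open up.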
Ripe times for vulnerability discovery exist at the following points, and the vulnerability discovery teams will hand off those vulnerabilities to exploit developers:
Between Initial Deployment and Installation. Hackers will get the software and try to do daring things to it, sometimes even before the first end user team has installed it. Any vulnerabilities discovered here are clearly zero day vulnerabilities.
Between Feedback and Patch Development. Hackers will look at public blogs and websites where bug tracker issues and core dumps are reported, to determine whether any of the logs identify vulnerabilities. Bugs that translate into vulnerabilities are not really zero days. Instead, these are known vulnerabilities that are not yet addressed. But this distinction may be a matter of semantics, and arguing the point is not worthwhile: from the point of view of attacker and victim alike, they are vulnerabilities.
Between Patch Deployment and Patch Installation. Hackers will look at patch deployments — especially security patches — to determine what vulnerability existed in the prior version. This point in time is one of the most prolific for a vulnerability researcher. These are not zero days. These are known vulnerabilities, and the systems remain vulnerable only because the end user hasn’t been responsible and deployed the patches. The attack surface is a result of unpatched systems, and it is solely the responsibility of the end user.
For example, Microsoft’s Patch Tuesday invariably results in Exploit Wednesday. Why? Because it takes a while for all users to update their systems. Oftentimes business users will refrain from patching immediately because of incompatibilities with other products.
At and after End of Life. Hackers will take advantage of end-of-life products: once the development team has left the update cycle, an undisclosed vulnerability lasts forever. These vulnerabilities are sometimes referred to as zero days forever.
Case in point: when Microsoft announced the end of updates for Windows XP, they also described how attackers would lay waste to users who remain on XP.
Zero day… What it means to me
So here’s the short of it all; let’s revisit our previous definition. A pure zero day is the window of time between when an attacker learns of a vulnerability and when the defense team learns of it. The exploit team is using it, and the defense team doesn’t know it exists.
Computer Network Attack
To better defend your network, it is a good idea to understand how the adversary is going to attack your network. From the perspective of the Computer Network Attack & Exploitation (CNA/CNE) teams, the job is to find vulnerabilities and build exploit paths. How is this done? CNA teams will:
Become aware of anomalies through publicly available crash dumps, bug reports, and forums where users of any particular piece of software discuss issues. If a system crashes or produces otherwise unexpected results, there is something wrong — and that something may turn out to be a real vulnerability, and in turn that vulnerability may turn out to be exploitable.
Reverse engineer patch code and compare it to the unpatched versions, especially anything identified by the vendor as “security patch”. Realize if you find a vulnerability, you are in a race to attack the unpatched systems in the wild before the end user patches those systems.
Do what you can to create anomalies. Look at the touch points on the system, be that a network, a keyboard, or some other input device. Use tools such as Metasploit and fuzzers to force the system to do things it wasn’t originally designed to do.
Be realistic. For every million well crafted test cases, be happy with a thousand anomalies. With a thousand anomalies, be happy with a couple of repeatable vulnerabilities.
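As a rough illustration of forcing anomalies, here is a minimal mutation fuzzer in Python. The target parser is entirely hypothetical, with a deliberately planted bug standing in for real-world flaws; real fuzzing uses far more sophisticated tooling than random byte flips:

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Flip a few random bytes to produce a malformed test case."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def toy_parser(data: bytes) -> int:
    """A stand-in target (hypothetical). It rejects bad headers cleanly,
    but trusts a length field taken from the input -- a planted bug."""
    if data[:2] != b"MZ":
        raise ValueError("bad header")
    length = data[2]
    payload = data[3:3 + length]
    return payload[length - 1]  # IndexError when length exceeds the payload

def fuzz(seed: bytes, iterations: int = 2000) -> list:
    """Hammer the target with mutated inputs and collect the anomalies.
    Clean rejections are expected; anything else is worth investigating."""
    anomalies = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            toy_parser(case)
        except ValueError:
            pass  # expected rejection, not interesting
        except Exception:
            anomalies.append(case)  # unexpected crash: a potential vulnerability
    return anomalies
```

Note the ratio at work: thousands of test cases distill down to a handful of crashing inputs, and only some of those will ever prove exploitable.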
Computer Network Defense
If you are on the Computer Network Defense (CND) team, your job is to protect the network from known and unknown (0day) attack. How? Keep abreast of the product user community blogs to see what other people are reporting, and keep in touch with your own users to determine if they witness anomalies on the platform. What should you do?
Expect that every anomaly is a vulnerability. There may not be an exploit path, but an anomaly is where every vulnerability is born.
Do what you can to isolate systems in general, and certainly any oddly acting systems. Network isolation is a great place to start.
Patch early, and patch often. Realize that when a patch becomes available, the CNA & CNE teams are reversing those patches to discover vulnerabilities and explore exploitation paths.
Be prepared with a patch plan. If a patch breaks one of your existing applications, be prepared to isolate the system instead of leaving an unpatched system in your universe.
For particularly difficult deployments where existing applications are known to not work with the most updated patches, use Virtual Machines to isolate those applications.
Remember, all of this is a race against time. Eventually (and yes, it may be years), every vulnerability will become publicly available and known, and once known the vulnerability will likely be eradicated through a patch or the exploit path will be nullified through isolation.
And as always, regardless of what side of the fence you are on, let’s be careful out there.
NIST SP 800-30, “Risk Management Guide for Information Technology Systems”
Microsoft Patch Tuesday, https://en.wikipedia.org/wiki/Patch_Tuesday
Zero Day Vulnerability, “A zero day vulnerability refers to a hole in software that is unknown to the vendor”, http://www.pctools.com/security-news/zero-day-vulnerability/
Zero Day, “A zero day exploit is when the exploit for the vulnerability is created before, or on the same day as the vulnerability is learned about by the vendor”, http://netsecurity.about.com/od/newsandeditorial1/a/aazeroday.htm
Zero Day Vulnerability, “A zero-day vulnerability is previously unknown vulnerability in a software”, http://www.thewindowsclub.com/what-is-vulnerability-in-computer-security
Computer Security. Kind of scary, actually. With the likes of Target going down to hackers in late 2013, and a large attack on Home Depot in 2014, what can the rest of us do? If Home Depot can be compromised, how can I protect myself?
The bad news — you are a target. Why though? Well, let’s consider:
Do you have any financial data on your computer? You are a target.
Does your company operate a health care agency with HIPAA/HITECH protected data? You are a target.
Do you have a point of sale system where you perform credit card transactions? You are a target.
Are you attached to the Internet? You are a target. What? That is crazy sounding. Why am I a target? Because a hacker can use your computer as a relay or in a Distributed Denial of Service attack.
I know at this point you are likely thinking, oh great, thanks for making my day. But remember, we are trying to make your computers safer. Before we get into that though, let’s take a look at how malware gets on your computer in the first place.
How malware infection happens
You may think, hey, the only way a stitch of malware can get on my system is through the network. A firewall is sufficient to protect against those blasted attacks.
Unfortunately, not all malware infects systems the same way. Certainly, network attacks are one attack vector, but there are others.
There are email attack vectors, mp3 attack vectors, html attacks, mpeg attacks, apk attacks, over-privilege attacks, Excel attacks, Word attacks, PDF attacks, and in fact the list never ends. An attack is possible anytime there is an interface to a computer. Sure, an mp3 attack may arrive through a network or USB, but it isn’t a network attack; it is an attack on the software that renders the mp3. Exploring attack surfaces fully is beyond the scope of this paper.
One thing to note though. You might think hey, I don’t really care if someone exploits my mpeg player. That is a risk I’m willing to take! What are they going to get? A movie? The laugh’s on them.
Well… not exactly. The way system exploitation works is this: exploit the low-hanging fruit and get a shell on the system. Once an attacker has a root shell? Game over. He owns you. Even worse, he may own your network, depending on the perimeter defenses that are in place. Think: defense in depth.
Alright already, we’ve covered enough. You may be thinking, this is way too much to take in. You are right, it is! The short question is: what can you do to make your computer safer? Let’s explore a few ways to help protect you from an attack.
Update your operating system software
The first thing you should do is make sure you are using a modern operating system if at all possible. Sure, sometimes this isn’t possible — for example, some systems, especially embedded systems, still run on XP. If that is the case for you, you’ll have to make other concessions to safeguard your systems, your networks, and your data.
You may be thinking, why in the world should I update my operating system? I paid for a version, it is working fine, so why should I update? Because hackers know there is a delay between the time a patch comes out and the time it is fully adopted in the community. When a patch comes out, especially a security patch, hackers are going to reverse engineer the update to determine how an existing installation can be compromised. And compromise they will.
Again, if at all possible, upgrade to a modern 64-bit operating system and keep that operating system patched. Are you using an outdated version of Windows and don’t wish to pay for an operating system? Then use a free operating system such as Ubuntu or one of the other Linux platforms. If that is not possible, then realize you are providing a fluid and rich attack surface, and do what you can to protect perimeter systems.
Update your application software
Are you still using a 16-bit or 32-bit application? Do what you can to upgrade that application.
In the same way that outdated operating system software presents security vulnerabilities, outdated user applications present security vulnerabilities in a very bad way. Each time an application is updated, hackers are very likely to review the updates to identify vulnerabilities in the existing installed user base.
Do you use an outdated version of Firefox? Or an outdated Adobe Reader? My suggestion is: don’t. But what if your company forces you to use an outdated version of one of these applications? Yes, that can be an issue. You can only do so much, especially if these decisions are above your pay grade. If you are forced to use outdated software, realize that it is a reasonable attack vector. Being aware is the first step to security.
But what about paid applications, you might ask? You paid nearly $5000 for your AutoCAD solution and more than a thousand for Adobe, is paying for an updated version really necessary? The answer is yes. You happen to be using a coveted piece of software. If you spent thousands for AutoCAD, it is likely that you have drawings and blueprints that are worth thousands more. Someone could use those drawings, especially if they can freely exfiltrate them from your computer.
How about layered applications like Internet Information Services (IIS), used to serve web pages to the world? Well, you picked an easy target! IIS is a common attack vector, in part because it is easy to fingerprint the version being used on a network. Once an attacker identifies that an old version of IIS is in use, the attacker only needs to find a known vulnerability in that particular version to compromise the server.
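Fingerprinting a web server can be as simple as reading its Server response header. Here is a minimal Python sketch; the host you point it at is, of course, a placeholder:

```python
import http.client

def grab_server_banner(host, port=80, timeout=5.0):
    """Ask a web server to identify itself via its Server response header."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server", "unknown")
    finally:
        conn.close()
```

An old banner such as Microsoft-IIS/6.0 tells an attacker exactly which known vulnerabilities to try, which is why production servers often suppress or falsify this header.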
Keeping your application software updated will go far in protecting your systems. Will it cost money? Yes, it likely will. I am a big proponent of open source software and the Free Software Foundation, so I’m not fond of the idea of having to spend money on new software. If you can find an equivalent open source package that will do an equally good job for you, I’d suggest migrating to it. Otherwise, yes, you’ll have to pay for that update.
If an application cannot be updated, do what you can to find a different and more recent application to use in its place.
Use a two way firewall
This might not sound reasonable at first. Why would I need a two way firewall? Because if a Trojan or other rogue executable finds its way onto your computer, a bidirectional firewall will be able to alert you when that software tries to communicate outbound.
A great free solution is ZoneAlarm Free Firewall.
Use a virus protector
A lot of people are going to discount this part of the solution. Why? Because virus protectors provide a false sense of security. Virus protectors only protect against “known” viruses.
This is true, virus protectors do often provide a false sense of security. That said, virus protectors do provide protection against known viruses, so why not use one?
There are several free solutions, one of which is Microsoft Security Essentials.
Download only from known good sites
This is a really important practice. Download only from known good sites.
For example, are you looking for an HP printer driver? Then go to the HP web site for the download. Do what you can to avoid “third party” driver sites.
Are you looking for a game or a program? Download from download.com / cnet.com, or from another known good source. There are web sites devoted to providing you excellent software — with an associated trojan or other form of malware attached.
Are you looking for a free Hollywood movie or free APK sideload of the latest Android software through The Pirate Bay? Then be aware that the free download may also have a free Trojan attached. How will you know whether that illegal download is malware? You likely won’t know, even if you run it through the Cuckoo Sandbox automated malware analysis software.
Change your behavior

Wait a second, behavior modification? I’m not looking for a psychologist! I don’t want to be Pavlov’s Dog! Well, that is not exactly what I mean by behavior modification.
If you are downloading something that you are not sure about, be careful about downloading it to your primary computer, especially if you use that computer for financial transactions. Set up a second computer where you can run any questionable programs, and where if those programs perform unexpected actions, your financial records will not be compromised.
You know those sweet popups that promise the first thousand people who click the banner will win a free iPad? Yeah, you aren’t going to get a free iPad. What you will get is infected. Don’t click that ad. Sadly, the fact that the ad even popped up may be very bad news: you may already be infected.
Another great safeguard is to run periodic full scans of your system. Run MSE full scans, but also run other scans such as the free Trend Micro Housecall.
Use reasonable passwords
It might be better said as: Don’t use unreasonable passwords.
What does this warning mean, anyway? One of the ways a hacker attempts to gain access to a system is through password cracking. Password cracking is a method of gaining access to a system by, essentially, “guessing” the password. A trained hacker will use one of the many password cracking software suites.
Is it reasonable to use abc123 or 1234 for a password? Probably not. Is it reasonable to use a single dictionary word? Probably not. Once a hacker has identified a username, these types of passwords are very quickly guessed.
So what makes a password more reasonable? Throw in a few upper case letters, and maybe symbols. For example, AbC123* is going to be a much less likely guess than abc123.
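A rough screening check along these lines can be written in a few lines of Python. The wordlist here is a tiny illustration of my own choosing; real cracking lists contain millions of entries:

```python
# A tiny illustrative wordlist; real crackers use far larger lists
COMMON_PASSWORDS = {"abc123", "1234", "password", "qwerty", "letmein"}

def is_unreasonable(password):
    """Flag passwords a cracker would guess almost immediately."""
    if password.lower() in COMMON_PASSWORDS:
        return True
    if len(password) < 6:
        return True
    if password.isdigit() or password.isalpha():
        return True  # a single character class is easy to brute force
    return False
```

Passing a check like this is a floor, not a ceiling: length and unpredictability matter far more than any single symbol sprinkled in.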
The four word solution!
So what is the solution to keep me and my data safe from attackers? The answer is: There Is No Answer. There are things you can do to make yourself more protected, and there are things to avoid that would make you less protected. Some of them have been covered in this paper.
The best advice available is: Be aware. Your data and your systems are costly, and compromises to your systems can be even more costly.
If you need personal advice on how to protect your data and your systems, feel free to contact me.
As always, let’s be careful out there!
Update your operating system
Update your software
Use a two way firewall
Use a virus protector
Download only from known good sites
Change your behavior
Avoid unreasonable passwords
HHS reference document for HIPAA/HITECH protected information, http://www.hhs.gov/news/press/2014pres/05/20140507b.html