Security Blog

Your source for information security news and views.

So much for the chain of trust

Posted by: Patrick Snyder

Tagged in: News, Google, digital certificate

We all know digital certificates are meant to keep us safe while browsing the web. Trusted root certificates come preinstalled on our systems, can only be altered with valid digital signatures, and establish a supposedly unbreakable chain of trust. But what happens when that chain of trust is in fact compromised? What happens when a digital certificate falls into the wrong hands?

Hackers have recently obtained a rogue SSL certificate for Google's domains, issued by DigiNotar, a Dutch certificate authority, and proof of the takeover has already been flaunted on pastebin.com. It is still unclear how the certificate was obtained: there may have been a breach of DigiNotar's systems, or simply a lack of oversight at the certificate authority. Either way, the event presents a significant security risk to users.

The certificate gives the hackers a trusted reputation for each of Google's many services, including Gmail, Google Search, and Google Apps. It would allow them to poison DNS, lure users to false sites through a massive spam campaign, and then use those sites to compromise users' accounts through man-in-the-middle attacks.

According to security professionals, and based on the information posted on Pastebin, the certificate is in fact valid. That leaves the hackers endless possibilities for exploiting it. And because the certificate is valid, users will see no warning message, even on a malicious site posing as Google.
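To see what "valid" means here in practice, the sketch below is a minimal Python example (my own illustration, not how researchers validated the Pastebin proof) that connects to a host over TLS and prints the issuer of the certificate the server presents. A rogue certificate that chains to a trusted CA passes verification silently, so inspecting the issuer is one of the few clues available:

```python
import socket
import ssl

# Connect to a host and retrieve the certificate it presents.
hostname = "www.google.com"
context = ssl.create_default_context()  # verifies against the system trust store

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# Flatten the issuer's relative distinguished names into a dict.
issuer = dict(pair for rdn in cert["issuer"] for pair in rdn)
print("Issuer:", issuer.get("organizationName"))

# If the issuer is a CA you would never expect for this domain
# (say, DigiNotar for google.com), the chain of trust is suspect.
```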

Google is expected to quickly patch Google Chrome's certificate blacklist and will most likely urge Microsoft, Mozilla, Apple, and others to follow in its footsteps for the safety of the Internet.


The recent 5.9 magnitude earthquake in Mineral, VA came as a complete surprise to those within its reach. Although damage was minimal, the event reminds us of the importance of disaster recovery and business continuity planning. So far, reports show only minor injuries, a safety shutdown of local nuclear plants, and some cell network disruption. These effects are minor compared to other major disasters. The most important lesson to take from this event is that these things can happen anywhere, and everyone must be prepared.

Your office may not be near a fault line, in Tornado Alley, or along a hurricane's path, but natural events do stray from their usual patterns from time to time. In a sense, there is no 100% safe place to be. It is always good practice to plan for every possible disaster, not just those common to your area.

This also raises questions about the placement of our disaster recovery providers. Chances are your disaster recovery provider has chosen a backup location that, on a normal day, is exposed to minimal risk of disaster. They probably claim the location was chosen for its low risk factor and generally safe environment. But as just stated, there is no end-all, be-all safe haven for data and IT centers to set up shop. So what happens if your disaster recovery provider is knocked out by a natural disaster? Do you have a backup for your backup?

On another front, the Tuesday quake may not have thrown any industries into disaster recovery mode, but it did shed light on the aging infrastructure of cities along the East Coast. Disaster recovery plans can help rebuild and enable business continuity after a damaging event; however, they do not generally take into account the fragility of the infrastructure currently in place. Many disaster recovery plans would be far less likely to be activated if the infrastructure they are built around were solid and secure from the start.

With Hurricane Irene bearing down on the East Coast within the next week, we can only hope the minor damage already done by the quake is not magnified by the hurricane. Be prepared, batten down the hatches, and have your disaster recovery and business continuity plans ready.


Compliance is never easy, and cloud computing only adds to the challenge of keeping up with standards and regulations. Until now, U.S. government agencies have found it difficult, if not impossible, to move their sensitive information into the cloud, despite federal programs aimed at doing just that. The issue has always been compliance and security: the management of sensitive data carries strict regulatory requirements that must be followed in order to protect that information.

Two of the most important regulatory requirements concern location and access control. Sensitive data from U.S. agencies must be stored within U.S. boundaries and be accessible only by users residing within the U.S. With most cloud services spanning several continents, keeping that data contained is nearly impossible.

Amazon Web Services hopes to meet this challenge with its newly announced GovCloud offering.

A description from Amazon Web Services about GovCloud:

AWS GovCloud is an AWS Region designed to allow US government agencies and contractors to move more sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. Previously, government agencies with data subject to compliance regulations such as the International Traffic in Arms Regulations (ITAR), which governs how organizations manage and store defense-related data, were unable to process and store data in the cloud that the federal government mandated be accessible only by U.S. persons. Because AWS GovCloud is physically and logically accessible by U.S. persons only, government agencies can now manage more heavily regulated data in AWS while remaining compliant with strict federal requirements.

The new service is also compliant with FISMA, SAS 70, ISO 27001, and PCI DSS Level 1, offers FIPS 140-2 compliant endpoints, and supports HIPAA requirements. This will most definitely make compliance auditing far less daunting and increase the security of data in the cloud. Hopefully the new service will lead more federal agencies to join the cloud movement and finally begin fulfilling the goals outlined in Vivek Kundra's Federal Cloud Computing Strategy.
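To make the location requirement concrete, here is a minimal sketch (my own illustration, using the boto3 SDK and the us-gov-west-1 GovCloud region name; GovCloud requires its own account and credentials, and the bucket name is hypothetical) of pinning an S3 bucket to the GovCloud region and confirming where it lives:

```python
import boto3

REGION = "us-gov-west-1"  # the AWS GovCloud (US) region

# GovCloud credentials are issued separately from standard AWS credentials.
s3 = boto3.client("s3", region_name=REGION)

# Create the bucket inside the GovCloud region so the data never leaves it.
s3.create_bucket(
    Bucket="example-itar-workload-data",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Verify the bucket's actual region before storing any regulated data.
location = s3.get_bucket_location(Bucket="example-itar-workload-data")
print("Bucket region:", location["LocationConstraint"])
```

Pinning the region in code is only one piece of the puzzle; restricting access to U.S. persons still has to be enforced through the account and its access policies.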


It's 2 a.m. on a Monday, the workweek starts in six hours, and your cloud service provider has just notified you that its services are down. What do you do?

This is the same question European consumers were asking themselves when Amazon's EC2 cloud services and Microsoft's BPOS cloud services were taken out by a lightning strike in Dublin earlier this week.

Despite the disaster recovery and business continuity plans these cloud providers had developed, things do not always go as smoothly as they look on paper. Amazon has backup generators that should have powered up in perfect synchronization to cover the power loss; however, the lightning strike was substantial enough to knock out the phase-control system that synchronizes the power loads. The backup generators therefore had to be brought online and load-managed manually, resulting in a noticeable outage for customers.

This is something for cloud services consumers to keep in mind. You have been reminded time and time again during security training that proper cloud integration involves strict audits of your cloud service provider, and those audits are sure to include disaster recovery and business continuity planning procedures. But having all this on paper is only half of the equation for effective system resilience and reliability; the execution of those procedures under pressure is the true test of recovery performance.

This brings us to what many IT security professionals see as the most important aspect of disaster planning: having a backup. That can include file backups, virtual machine image backups, and even fully operational system backups (what many of us recognize as "hot sites"). Most cloud service providers offer extensive features covering many of these protection services. Although bundling them all with the same provider may be more convenient, it can also lead to further disaster in times of peril.

As the abundance of cloud outages so far this year has shown, bad things do happen to cloud services. The cloud will go down. That makes third-party services all the more important for keeping you running while your main cloud provider gets back on its feet. Just as it isn't smart to "put all of your eggs in one basket," it probably isn't a good idea to place all of your computing power and resources in the hands of one provider.
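As a toy illustration of that eggs-in-one-basket point (the destination paths are hypothetical stand-ins for two unrelated providers), the sketch below writes the same backup archive to two independent destinations and verifies each copy by checksum:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 checksum of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

archive = Path("nightly-backup.tar.gz")
# Two independent destinations; in practice, two different providers.
destinations = [Path("/mnt/provider-a/backups"), Path("/mnt/provider-b/backups")]

expected = sha256(archive)
for dest in destinations:
    dest.mkdir(parents=True, exist_ok=True)
    copy = Path(shutil.copy2(archive, dest))
    # A backup you haven't verified is a hope, not a backup.
    if sha256(copy) != expected:
        raise RuntimeError(f"Backup copy at {copy} failed verification")

print(f"{archive.name} stored and verified at {len(destinations)} locations")
```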


Forget about LulzSec and Anonymous. Those political hacktivist groups look like amateur script kiddies compared to the hackers recently revealed by McAfee. The newly discovered group's five-year-long attack, which struck at least 72 identified organizations, appears to have originated in China, although no official location has been determined.

Dubbed Operation Shady RAT (the RAT stands for remote administration tool), the campaign employs spear-phishing messages that mimic legitimate email (just as many other phishing attacks do); once users open the attachments, their systems become infected with malware that lets them be controlled by a command-and-control server hosted by the hackers. Unlike other attacks we have seen, this hacking group doesn't seem to be out for laughs or a quick payout. It's data they are after, and lots of it.
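As a toy illustration of how these lures are fought (my own example, not McAfee's analysis), the sketch below applies one very simple heuristic a mail gateway might use against spear phishing: flag messages whose visible From domain disagrees with the envelope Return-Path domain. Real campaigns defeat naive checks like this, but it shows the general shape of the problem:

```python
import email
from email.utils import parseaddr

# A hypothetical spear-phishing message: the visible From address mimics
# a legitimate partner, but the envelope sender points elsewhere.
RAW_MESSAGE = b"""\
Return-Path: <billing@mail-attacker.invalid>
From: Accounts Payable <ap@legitimate-partner.example>
Subject: Updated invoice attached
To: victim@example.org

Please see the attached invoice.
"""

msg = email.message_from_bytes(RAW_MESSAGE)
from_domain = parseaddr(msg["From"])[1].rsplit("@", 1)[-1]
return_domain = parseaddr(msg["Return-Path"])[1].rsplit("@", 1)[-1]

if from_domain != return_domain:
    print(f"Suspicious: From={from_domain} but Return-Path={return_domain}")
```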

The longevity of the attacks has led to the compromise of petabytes of data so far. The damage done, and the value of the proprietary information lost, are far greater than anyone would have predicted, and until the attackers are shut down, it is only expected to get worse.

This attack brings to light a concept we have been putting in front of IT security professionals for quite some time now. Anyone who has attended Ken Kousky's Strategy to Reality seminars has most definitely heard about Advanced Persistent Threats (APTs). This was the same approach used in the SCADA attacks on Iran's nuclear facilities and in Operation Aurora against Google and a dozen or more other organizations. For those who need a brush-up on APTs, think of them as interactive, polymorphic attacks whose controllers can evolve and adapt to any security system: you build a wall, they knock it down; you dig a moat, they swim across it. APT attacks represent a new generation of nearly unstoppable cyber attacks.

The only way to stop an APT attack is to cut it off at its driving source: the C&C server. McAfee is working with a variety of US government agencies to shut down the server; however, the attackers' five-year head start, along with jurisdictional issues, is sure to make this quite a challenging task.

Another issue is many organizations' failure to report or admit a compromise, which makes these attacks even more difficult to track. Security professionals must keep in mind that, whatever your organization's reputation or pride, you have a duty to disclose attacks to the proper authorities. These attacks cannot be ignored and cannot be fought alone.

Microsoft has even started a program offering a $250,000 incentive to anyone who contributes outstanding solutions to these attacks in defense of the future of computing technology.

If you're wondering whether your organization could be a target, just ask yourself one question: does my information hold any value whatsoever? I'm guessing that for 95% of organizations the answer is yes.

