Recently, I recorded a podcast with the folks at RackN on cloud security, GDPR, and a whole list of other cybersecurity-related topics. Rob Hirschfeld and Stephen Spector are part of the leadership at RackN and experts in data center automation.
From the RackN website:
RackN allows Enterprises to quickly transform their current physical data centers from basic workflows to cloud-like integrated processes. We turned decades of infrastructure experience into data center provisioning software so simple it only takes 5 minutes to install and provides a progressive path to full autonomy. Our critical insight was to deliver automation in a layered way that allows operations teams to quickly adopt the platform into their current processes and incrementally add autonomous and self-service features.
You can find the podcast here:
https://www.rackn.com/2017/12/11/podcast-chris-steffen-security-cloud-edge-coming-gdpr/?utm_content=social-ka5na&utm_medium=social&utm_source=SocialMedia&utm_campaign=SocialPilot
Monday, December 11, 2017
Friday, November 10, 2017
IoT Security: Understanding my Connected Thermostat
Today, I wanted to share a post from a guest author. My friend Jason Garbis (@jasongarbis) created this piece about IoT and your home thermostat. It is a great read and the research is really interesting! I know many folks who are adopting IoT devices in their homes and will likely not put in the level of effort that Jason did to understand their security. Good news in this case – he did it for you!
I’m a technical guy, and I like understanding how things work. Because I’m employed at a network security company, I’ve been doing a lot of reading and writing about network security, the (in)security of connected devices, and attacks such as the Mirai botnet.
Which brings me to my connected home Thermostat, a Trane model which uses the Nexia home automation platform. I wanted to understand the network model for this device. I can use the Nexia app on my phone to control the thermostat from anywhere, but how does this work? Does my device have an open connection to a service in the cloud? Or is there (shudder) an inbound connection to it?
I’ve been able to answer these questions with some research, but this was harder than it should be, and there’s little hope for less-technical people to be able to figure these kinds of things out for their home automation systems.
So let’s get started!
One note: For privacy purposes, I have redacted my home IP address throughout this document.
The thermostat is on my home wireless network, with an IP address assigned to it from my wireless router: 192.168.1.7.
I performed a quick port scan with two different tools – nmap running on a local machine, and fing running on my phone – and both showed no open ports on the device. This is a good first result from a security perspective! (Note that I’ve also configured my wireless router to have no open ports and to disallow all incoming network connections, so even if the thermostat had an open port, it would only have been accessible on the wireless network, and not from the Internet. And yes, UPnP is also disabled on the router!)
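nmap and fing do this far better, but for the curious, the same kind of TCP connect check can be sketched in a few lines of Python. The device address here is my thermostat's private IP; substitute your own device's address:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                # connect_ex returns 0 only if the TCP handshake succeeds
                if sock.connect_ex((host, port)) == 0:
                    open_ports.append(port)
            except OSError:
                pass  # unreachable hosts and timeouts count as closed
    return open_ports

# A few common service ports against the thermostat's address:
print(scan_ports("192.168.1.7", [22, 23, 80, 443, 8080]))
```

On my network this prints an empty list, matching what nmap and fing reported.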
Let’s take a look at the device itself – it’s a Trane XL824, which shows up on the network as a Murata Manufacturing device. This device displays a local weather forecast, and is controllable from a smartphone app.
It’s clear that the thermostat is making an outbound connection to a server, and obtaining data such as the weather forecast, and commands such as temperature setting changes from the phone app over this connection [note that while some systems might use a peer-to-peer connection over the local wifi, that’s not how this system operates]. In particular, I’m very interested in understanding the command model, and the security around this. How are changes to my thermostat settings performed? What’s the data flow from the phone app to my thermostat?
My standard-issue home wireless router offers very little in terms of actual network monitoring features. If you dig through the painfully slow admin UI, it does offer a crude security log with a shockingly small capacity of 16KB. This corresponds to only a few seconds of traffic, apparently, before we see:
Fortuitously, in this brief snippet of log I discovered an outbound connection from the thermostat’s private IP address (192.168.1.7) out to a remote system, at IP address 23.194.182.156 on port 80.
This IP address is operated by Akamai – it’s not surprising that the Nexia folks, with probably hundreds of thousands of thermostats running 24x7, would make use of a CDN to farm out content to nearby nodes. But, I still have a few questions about this.
Why is it using port 80 rather than 443? A simple port scan shows that the target IP address has both ports 80 and 443 open. Should I be concerned about this traffic being unencrypted?
Trying this IP address in my browser results in a web server error –
So, let’s try HTTPS:
Aha! Look at the domain associated with the certificate. This is a site providing the weather forecast data to the thermostat. Presumably it makes regular outbound calls to the Akamai-hosted CDN site to obtain this data.
I happened to catch an outbound call to this service in that brief log snippet. I’m guessing that it was preceded by a DNS lookup, which returned this nearby Akamai IP address based on my geolocation. Obtaining weather data over HTTP rather than HTTPS may seem fairly benign, but does introduce a potential vulnerability. A man-in-the-middle or DNS hijacking attack could pretty easily serve up bogus or malformed weather data, and this malformed data could be used to perform an attack and obtain a foothold on the thermostat, for example via a buffer overflow. So I need to give Nexia a small demerit for this. Ideally they’d use HTTPS to preserve the integrity of the data, and also perform a certificate revocation check.
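For contrast, here is what getting the client side right looks like. This is a minimal Python sketch of a properly-validating TLS client configuration – roughly what the thermostat's firmware would need before trusting the weather feed. The file path in the revocation comment is illustrative, not a real API requirement:

```python
import ssl

# ssl.create_default_context() gives a client context with certificate
# verification and hostname checking already switched on.
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED  # server cert chain is validated
assert context.check_hostname                    # cert must match the hostname

# The extra step suggested above -- a revocation check -- requires loading
# CRL data and enabling leaf-certificate revocation checking:
# context.load_verify_locations(cafile="roots-and-crls.pem")  # illustrative path
# context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF
```

A client built on this context would refuse the connection outright if an attacker served up a bogus certificate, closing the MITM hole that plain HTTP leaves open.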
Back to the task at hand, it’s clear that I need more comprehensive logging of my home network. Despite my wireless router’s limitations, it does support the ability to log to a remote system:
I have an old laptop at home on which I’ve installed Linux, so I got this fired up and configured to listen for SYSLOG data coming from the router.
Ok…lots of data to parse. Let’s filter it a bit…
And (reformatted for clarity) a basic pattern emerges:
Looks like the thermostat is calling out to the Weather service every 5 minutes. This pattern is quite regular:
- Generally the outbound connections go to either 23.194.182.156 or 23.192.142.167. These are both Akamai IP addresses, so I’m guessing that DNS is returning these as part of a load-balanced set.
- These calls are all preceded by a DNS lookup over UDP port 53. The log shows these going first to the router (192.168.1.1), which then sends them along to the external DNS server.
There are also some other connections that are not only off-cycle from the weather calls, they’re also the only ones going out from the thermostat to port 443.
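The five-minute cadence is easy to confirm programmatically. Here's a small Python sketch that filters the outbound weather calls and computes the gaps between them; the log lines are hypothetical stand-ins modeled on the router's format, not my actual log:

```python
from datetime import datetime

# Hypothetical log lines modeled on the router's format -- my real log was
# messier, but carried the same fields.
log_lines = [
    "Nov 10 15:05:02 OUT src=192.168.1.7 dst=23.194.182.156 dpt=80",
    "Nov 10 15:10:02 OUT src=192.168.1.7 dst=23.192.142.167 dpt=80",
    "Nov 10 15:15:03 OUT src=192.168.1.7 dst=23.194.182.156 dpt=80",
]

def poll_intervals(lines, src="192.168.1.7"):
    """Seconds between successive outbound calls from src on port 80."""
    times = [
        datetime.strptime(" ".join(line.split()[:3]), "%b %d %H:%M:%S")
        for line in lines
        if f"src={src}" in line and "dpt=80" in line
    ]
    return [(later - earlier).total_seconds()
            for earlier, later in zip(times, times[1:])]

print(poll_intervals(log_lines))  # gaps of roughly 300 seconds -> a 5-minute poll
```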
Based on a reverse DNS lookup, these servers are running in AWS – and map to an EC2 instance and an S3 bucket. These are presumably the control mechanisms for the thermostat.
Let’s see what these services are: 52.1.236.158 has ports 80 and 443 open, so loading this in a browser leads us to…
the Nexia site (properly redirecting to HTTPS).
The other two destinations for port 443 correspond to S3 buckets, which don’t offer a Web interface without authentication, and without permission I’m not going to poke around them in any case.
Just for kicks, let’s do a DNS lookup on the mynexia.com domain:
And there it is, 52.1.236.158. Hosted in AWS and assigned to mynexia.com. This is clearly the connection we’ve been looking for! Let’s think about the system behavior for a moment – I can use the Nexia iPhone app to adjust my thermostat, and these changes take place essentially immediately – within ~5 seconds based on my handful of tests.
This implies near-real time communications over an existing network connection, not something that’s polling-based. And because we’ve established that there are no inbound connections to the thermostat, this outbound connection to the Nexia system must be long-lived.
Let’s take a look at the log files to see what else we can discover. We can see several connections outbound to port 443 on the Nexia server.
And looking at a DNS log I set up – I used BIND and configured my router to use my Linux laptop as its DNS server -- we can see the mynexia.com domain resolution request:
Some of these connections are only open for a few seconds – for example the one outbound on port 44318 is opened at 15:57:09, and closed at 15:57:12. But the connection on port 50216 – opened at 15:27:41 – remains open for a long period (beyond the horizon of when I turned off the logging server that day).
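A quick way to spot the persistent connection is to pair up open and close events and compute each connection's lifetime. This Python sketch uses the two connections described above; a connection with no matching close event is the long-lived command channel:

```python
from datetime import datetime

# Open/close events for the connections described above; a connection with
# no close event was still open when logging stopped.
events = [
    ("15:27:41", "open",  50216),
    ("15:57:09", "open",  44318),
    ("15:57:12", "close", 44318),
]

def connection_lifetimes(events):
    """Map each source port to its lifetime in seconds (None = never closed)."""
    opened, lifetimes = {}, {}
    for ts, action, port in events:
        t = datetime.strptime(ts, "%H:%M:%S")
        if action == "open":
            opened[port] = t
            lifetimes[port] = None  # provisionally open
        else:
            lifetimes[port] = (t - opened[port]).total_seconds()
    return lifetimes

print(connection_lifetimes(events))
# Port 44318 lived only ~3 seconds; port 50216 has no close -> the persistent channel
```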
This is exactly what we’d expect from the observed behavior! A long-lived outbound connection from the thermostat to the Nexia server, used to communicate commands in near real-time.
Is this secure? Let’s assess – it’s using HTTPS for the connection, which is clearly a good foundation. I can’t tell whether it’s doing proper certificate validation on the mynexia.com domain; checking that would require deploying a firewall and performing HTTPS inspection, which is beyond the scope of this article.
So let’s summarize what we learned here, and assess its security. The thermostat is only making outbound connections, and doesn’t require either an open port or (the horror of) UPnP. It’s making an HTTPS connection to its command & control system, hosted at a recognizable domain. These are all sound security approaches.
My only criticism is that it’s making an unsecured call to a Weather.com server over HTTP. This is a small but real vulnerability, since it’s subject to an MITM attack that could exploit a buffer overflow of some sort. I’m not terribly worried about upstream attackers at the ISP, but someone could create a rogue wireless access point and capture the outbound calls to the weather forecast server. Or in theory hijack my DNS, redirect the thermostat to a bogus weather forecast server, and deliver malformed data. Again – these are real but unlikely attacks.
Overall, I’m satisfied with the security of this device. I learned a lot doing this research, and I hope that you’ve found this writeup useful. Let me know what you think – I’m reachable on Twitter @JasonGarbis
Thanks!
Friday, September 8, 2017
Equifax Data Breach...
Sorry it has been a while since I last posted. As you can imagine – the world of a cybersecurity guy can be slightly busy at times!
I did want to take a moment and warn everyone about the Equifax data breach.
For those that may not have heard yet, the credit repository Equifax suffered a massive data breach, losing 143,000,000 records. The hack began in May, and was finally terminated in late July.
Equifax notified the public yesterday, but presumably they have been working with the federal law enforcement community and the various state attorneys general about the breach (as required by law). I know that their incident response procedure specifically directs that they work with the FBI to determine the source and impact of the breach before notifying the general public – I can only hope that they followed their own procedures.
I won’t go into specifics about the breach, or the failed procedures on the part of Equifax that allowed this to happen. But I did want to share a few tidbits that are important to the general public, and that may help give a better understanding of the breach and how the public is affected.
- Equifax has credit information on pretty much every American. They are one of three major credit repositories. In most cases of a data breach, the consumer would have had to do business with the breached retailer to have been exposed (such as the Home Depot credit card breach, or a records breach at a hospital or school). Not so with Equifax. You may have never heard of Equifax before today, but as one of the credit repositories, they have ALL of your information.
- There are about 325,000,000 people in the US, and the Equifax breach lost 143,000,000 records. For simplicity’s sake, that means that roughly 1 out of every 2 people had their credit information stolen as part of this breach. That means that it was you OR your spouse. Your Mom OR your Dad. You OR your siblings. You OR your child.
Point being, YOU or someone close to you was certainly affected by this breach. So please spread the word.
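That "1 in 2" figure is easy to sanity-check with the numbers reported for the breach:

```python
# Back-of-the-envelope odds that a given US resident was in the breach
records_lost = 143_000_000
us_population = 325_000_000
print(f"{records_lost / us_population:.0%}")  # 44% -- roughly 1 in 2
```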
Equifax has posted this report:
https://www.equifaxsecurity2017.com/
You can see if you were among those that were breached, and they will give you instructions. Ironically, if you were affected, they will sign you up for credit monitoring, but not until next week (shrug?).
Please pass this along to anyone / everyone you know.
Monday, August 28, 2017
Addressing Network Segmentation for PCI 3.2 with the Software-Defined Perimeter
This blog originally appeared on the Cryptzone blog. You can find it here.
Most companies selling to the public – and certainly all e-commerce companies – are required to comply with the Payment Card Industry Data Security Standard (PCI DSS). Basically, all businesses that accept credit cards as payment must adhere to the PCI standards and go through a certification process on an annual basis.
While the PCI DSS is nothing new, breaches are still occurring with alarming frequency. And those charged with protecting credit card information are paying attention, revising the standards for securing credit card data to combat emerging threats and scenarios.
In December 2016, the PCI Security Standards Council released “Guidance for PCI DSS Scoping and Network Segmentation”. This document was created to clarify how businesses and auditors should assess their Cardholder Data Environments (CDE). Specifically, it includes guidance as to what systems and processes should be included as part of a PCI evaluation and scope:
Accurate PCI DSS scoping involves critically evaluating the CDE and CHD flows, as well as all connected-to and supporting system components, to determine the necessary coverage for PCI DSS requirements. Systems with connectivity or access to or from the CDE are considered “connected to” systems. These systems have a communication path to one or more system components in the CDE.
The guidance summarizes how environment scoping should be approached:
The following scoping concepts always apply:
- Systems located within the CDE are in scope, irrespective of their functionality or the reason why they are in the CDE.
- Similarly, systems that connect to a system in the CDE are in scope, irrespective of their functionality or the reason they have connectivity to the CDE.
- In a flat network, all systems are in scope if any single system stores, processes, or transmits account data.
One of the primary areas of focus is how critical network segmentation is to reducing the overall PCI scope, as even machines that are not directly involved with credit card processing but are still able to access Cardholder Data (CHD) *MUST* also be included as part of the PCI scope:
The intent of segmentation is to prevent out-of-scope systems from being able to communicate with systems in the CDE or impact the security of the CDE. Segmentation is typically achieved by technologies and process controls that enforce separation between the CDE and out-of-scope systems. When properly implemented, a segmented (out-of-scope) system component could not impact the security of the CDE, even if an attacker obtained administrative access on that out-of-scope system.
As a best practice, and to significantly reduce the scope of the PCI environment, companies must look to properly segmented networks to protect their CHD.
AppGate SDP
When looking for tools to segment your networks, you can always turn to a myriad of firewall rules and antiquated third-party tools that might get you to the desired state. But the solution being evaluated and recommended by PCI QSAs for network segmentation is the Software-Defined Perimeter (SDP).
AppGate SDP is the industry’s leading Software-Defined Perimeter solution. Properly deployed, AppGate SDP will reduce the scope of PCI DSS and other regulatory audits by eliminating unnecessary devices, networks, and appliances from the audit. AppGate SDP makes any resources that are not specifically granted access to an environment invisible to that environment, thus reducing the chance of additional devices and resources being added to the evaluation.
Many companies are evaluating their annual PCI audit results and looking for ways to remediate outstanding control gaps, especially those related to protecting network access. AppGate SDP addresses these requirements, as well as many of the other PCI controls. More information about how AppGate SDP addresses PCI 3.2 requirements can be found in this whitepaper.
Labels: Compliance, Cryptzone, Cyxtera, microsegmentation, PCI
Monday, May 15, 2017
Ransomware SUCKS - Here are some things you can do...
By now (unless you are living under a rock) you have heard about the terrible WannaCry ransomware attacks infecting computers across the planet. Seemingly, no business type is spared, and the malware isn’t just going after businesses – lots of individuals are being infected as well.
So here is a bit of info about the attack, and what individuals and businesses can do to prevent it:
What is it:
Ransomware is software created by cybercriminals to encrypt the files on your computer, blocking you from using the computer until you pay a fee (ransom), usually in difficult-to-trace Bitcoin or in gift cards such as Amazon and iTunes.
In this latest iteration of ransomware, the bad guys used an exploit, discovered and released as part of an information leak from the NSA, that attacks a specific communications protocol on Windows computers. Microsoft released a patch for this in March 2017 to address the issue (MS17-010, which can be found here), but those without the patch are very much at risk of getting the malware on their computers.
What can individuals do:
Individuals should consider the following with regard to protecting their computer:
Windows Update: Make certain that Windows Update is set to automatically download and install any critical updates. Windows Update is generally located in your Control Panel, but may be in a different location depending on the version of Windows that you are running.
Install Anti-virus: While certainly not a catch-all solution, find a good anti-virus program for your computer. There are lots of options out there – if you have high-speed Internet, there is likely a free download from your Internet provider as part of your Internet service. Check their website for more information about downloading and installing this free AV software. If you do not have high-speed Internet, there are still free options available. AVG and several other companies offer very good and fast anti-virus software for your computer. There is really no excuse NOT to have anti-virus software on your computer any longer, and it can act as a first line of defense to protect you from the bad guys.
Regular Backups: If you become infected, the only way to get your files back (without paying the ransom) is to restore from a backup of your files. You can back up your data to the cloud – there are lots of very inexpensive services out there that can do this for you. Or you can do it yourself and back up to an external hard drive – again, very inexpensive drives are available and easy to use. They can be found pretty much anywhere (Amazon, Wal-Mart, even Sam’s Club had them on sale this past weekend). Those pictures that you took over the weekend for Mother’s Day cannot ever be replaced, so invest some time and effort in a good backup solution.
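For the do-it-yourself crowd, even a few lines of Python can produce timestamped backup folders on an external drive. This is just a sketch of the idea, and the example paths are placeholders – dedicated backup software handles incremental copies, verification, and scheduling far better:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source_dir, backup_root):
    """Copy source_dir into a new timestamped folder under backup_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source_dir, dest)  # fails rather than overwrite an old backup
    return dest

# Example (placeholder paths -- point these at your own folders and drive):
# backup("C:/Users/me/Pictures", "E:/backups")
```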
Be Aware of What You Click: Lastly, nothing mentioned above will protect you from everything the bad guys can throw at you. You should be mindful about the websites you visit, the emails you open, and the applications you install. If you do not know the source of an email or application, DO NOT OPEN IT! If you don’t know whether a website is reputable, it is probably not the best site to visit. Be smart about the things you see and do on your computer – a little common sense will save you from these kinds of nasty viruses.
What IT Pros should do:
In addition to everything listed above (which I would certainly hope is already happening in your organization), consider implementing technologies that help segment your networks, making malware such as WannaCry less invasive. Cyxtera CISO Leo Taddeo presented the Software-Defined Perimeter as a viable solution / technology to combat these kinds of threats. You can see his CNBC interview here:
Firewalls and VPNs are decades-old technologies, and the bad guys create their viruses to take advantage of these antiquated technologies. A software-defined perimeter creates an individualized network, specific to the resources authorized for a specific user. In addition to dynamic condition checking, it is designed to confine a user to only the places they are authorized to go, thus protecting the majority of your company’s resources.
You will hear more about solutions to defend your computers and networks in the coming days and weeks from every security / technology pundit out there (likely me included). Regardless of the solutions that you choose to augment your security and networks, make certain that they are on the cutting edge of today’s technology, with a strong vision of how to deal with the emerging threats of the future.
Labels: Cyxtera, Data Security, Ransomware, Security Awareness
Thursday, May 4, 2017
Star Wars Day - Revisited!
It just wouldn't be Star Wars Day without me posting something about it. And I decided to revisit and repost my "Empire Information Security Failures" blog from last year, as it was extremely well received. You can find the original post on the Cryptzone website here.
To celebrate Star Wars Day, I thought I would share a few ways in which Information Security best practices were not adhered to by the Empire, enabling the Rebels to win.
To be clear: I do not support the Empire, the Sith Lords, nor any other types of scum and villainy. Nor am I trying to portray the Rebel Alliance as some weird, Force-wielding, Galactic Hacker consortium. But had the Empire not been so lax in their security controls, Emperor Palpatine and his buddies might have been able to bring their “order and peace” to the galaxy.
Social Engineering: Social engineering is an attack that uses human interactions and plays on human weaknesses to break established security procedures.
Scene: Luke and Han, dressed as Stormtroopers, escorting Chewbacca to the prison block (Star Wars IV: A New Hope).
Lots of things going on here. First, no one wants to mess with a Wookie. So others were less likely to get involved when they saw that the Wookie was being escorted by two (only two) Stormtroopers. Luke and Han knew that if they looked like they knew what they were doing, they could walk around in plain sight without being questioned by anyone. Even after arriving at the detention block, the supervising guard did not suspect them as being bad guys, and only questioned them on a matter of paperwork. Sure, everything fell apart at that point — one of the security controls finally kicked in. But Luke, Han and Chewie were able to walk pretty much anywhere they wanted on the Death Star by exploiting social engineering flaws.
Lesson: People — not just the bad guys — exploit social engineering gaps every day. When was the last time you piggybacked someone into a controlled building? The really bad guys know this as well, using our politeness (holding a door open for someone) against us. It is extremely hard to break those habits, which is why your security guys are constantly reminding you about them. Who knows if the guy you are holding the door for is coming to blow up the building (or the Death Star)?
Identity and Access Management: Identity and access management is the system used by entities to allow and prohibit access to resources controlled by the entity.
Scene: Luke, Leia, Han and Chewie on the Shuttle trying to land on Endor (Star Wars VI: Return of the Jedi)
The Rebels have stolen (property theft, probably due to lack of physical security controls on the part of the Empire) a small Imperial shuttle and are landing a team on Endor to blow up the shield generator protecting the second Death Star. Apart from using the Imperial shuttle, the Rebels have also stolen a security code that will allow the shuttle to land on the forest moon. There are multiple points that the code could have been rejected, with the admiral even claiming that it was an older code. Eventually, the Rebels are given clearance and allowed to land.
Lesson: Identity and Access Management is a difficult topic for most businesses. Larger businesses MUST have a solution for IAM in place, as their employees turn over much more frequently than in smaller companies. And unfortunately, there are always gaps — the employee who was terminated months ago still has an active security badge, because the two systems are not connected, and the administrator of the badge system was not notified (or on vacation or whatever) that the employee was no longer with the company. All businesses need to have controls in place, audited regularly, to make certain that there are as few gaps as possible.
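That "disconnected systems" gap is exactly what a periodic reconciliation audit catches. A rough sketch, with entirely made-up rosters standing in for the HR and badge systems:

```python
# Hypothetical IAM reconciliation audit: compare the HR system's list of
# active employees against the badge system's list of active badges and
# flag the discrepancies on both sides.

hr_active = {"lskywalker", "hsolo", "lorgana"}       # HR: still employed
badge_active = {"lskywalker", "hsolo", "dvader"}     # badge system: active badges

# Badges that should have been deactivated when the employee left.
orphaned_badges = badge_active - hr_active

# Current employees with no active badge (never issued, or wrongly revoked).
missing_badges = hr_active - badge_active

print("Deactivate:", sorted(orphaned_badges))   # -> Deactivate: ['dvader']
print("Provision:", sorted(missing_badges))     # -> Provision: ['lorgana']
```

Real deployments would pull these sets from an HR feed and a physical access control system on a schedule, but the core of the control is just this set difference, run regularly and acted on.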
Data Security: Data Security includes the methods used by an entity to protect all manner of data from those not authorized to use it.
Scene: Princess Leia and her crew intercept the technical plans to the Death Star (Star Wars IV: A New Hope)
The very first scene in the very first movie (yes, the original Star Wars will ALWAYS be the first movie to me) starts with an epic space battle — the Empire is beating up a Rebel blockade runner that happens to be carrying Princess Leia and the technical plans for the first Death Star. The Rebels had intercepted those plans, and the Princess was in the process of delivering those plans back to her home world when she was captured. The Rebels had been a thorn in the side of the Empire to that point, but now they had the data necessary to severely cripple the Emperor’s plans of galactic domination using the Death Star.
Lesson: The Empire should have done a better job of securing the plans. In Rogue One: A Star Wars Story, we find out that the data was stored in the Imperial library on Scarif. We don’t know if the data drive that Jyn Erso stole was encrypted or not (another tenet of data security), but even if it was encrypted at rest, it was transmitted using an unsecured methodology, allowing the Rebel Alliance to intercept them (and break the encryption, if necessary). Most companies and entities have intellectual property / trade secrets / military secrets that they don’t want others to have. Not only should that data be encrypted and protected, but the networks and devices that send and store the data need to be protected as well.
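To illustrate the at-rest half of that lesson, here is a toy sketch using a one-time pad from Python's standard library. This is purely illustrative (real systems would use something like AES-GCM for data at rest and TLS for data in transit, and the key itself must be protected and the pad never reused):

```python
# Illustrative only: a one-time pad built from a CSPRNG stands in for a
# real cipher. Encrypting at rest protects the stored copy, but says
# nothing about the channel it later travels over -- the Empire's gap.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext)  # one-time pad: key must match length
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

plans = b"Death Star thermal exhaust port schematics"
key = secrets.token_bytes(len(plans))

stored = encrypt(plans, key)   # encrypted at rest in the Scarif library
# Transmitting `stored` over an unprotected channel (or losing the key)
# still hands the data to whoever is listening with enough resources.
assert decrypt(stored, key) == plans
```

The moral in code form: encryption at rest and encryption in transit are separate controls, and you need both, plus key management, before the data is actually safe.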
Some of these examples are a bit convoluted, and I am sure there are some out there that would like to debate the finer details of exactly what happened in the movie (message me — we can talk specifics (I had to amend some things for brevity’s sake)). But the point is that Star Wars Day is just another opportunity to remind you (and your employees and everyone else) about the impact information security has on so many aspects of our lives. If Star Wars makes that point a little more enjoyable, then I’ve accomplished my goal!
Enjoy the day, and “May The Fourth” be with you!
Monday, May 1, 2017
Hybrid Cloud - Yes, You Can!
I recently posted this blog on the Cryptzone website. You can find the original posting here.
I was recently with 7,500 of my closest Amazon AWS friends at the AWS Summit in San Francisco. Generally, when you go to an AWS conference, the talk is ONLY about AWS: the latest features, implementation and design, or optimization of the AWS configuration. And certainly – those conversations were happening. But from my vantage point in the Cryptzone booth, there was another conversation, one that I touched on a bit in my previous recap blog. People at an AWS conference are finally talking about the hybrid cloud.
The concept of a hybrid cloud is not new – in fact, it has been around long before the term was even coined. But the fact that customers / potential customers are searching for ways to integrate their AWS or public cloud infrastructure with their on premises resources is exciting to me for a number of reasons:
1. Reality Check: For years, I have been preaching the benefits of a hybrid cloud solution. It never seemed realistic to me that an established company would dump 100% of its business workloads on a public cloud. Sure, your company could have been “born in the cloud” and optimized from the start to use only cloud-based resources. Some of those companies exist (and are THRIVING, BTW). But most companies that I have chatted with have adopted the cloud over time, meaning that they are in the process of migrating existing on premises workloads to a cloud infrastructure. I think that is great! Testing the waters in a measured and calculated fashion is often the best and most cost-effective way of taking advantage of cloud resources. Of course, those in the public cloud space would like you to move a little faster, but conducting a thorough evaluation of cloud solutions while maintaining your on premises environment just makes sense.
2. Manageability: One of the many things that has been a barrier to public cloud adoption is the ability to manage users and resources in the public cloud with the same tools used on premises. Who wants to manage multiple IAM solutions? Also, users that attach to the cloud need to be able to do so without going through a dozen authentication steps. Simply put, IT administrators are hesitant to expose their users to any additional processes or environments that will exponentially increase the IT admin’s workload. Can you blame them? On this front, the great news is that the management solutions for hybrid cloud infrastructures are becoming more mature EVERY DAY! Because of this, those IT admins are not as skeptical about adding another layer of infrastructure to their environments, especially if they can all be managed without any significant changes to how the user would consume that infrastructure.
3. Scalability: Moving workloads to a public cloud environment has always been about the ability to scale up a workload with very little effort – it is as simple as setting up an AWS account, starting up an instance, and deploying the workload. Easy peasy. Developers have realized this for a while now, creating testing environments for QA, demos and proofs of concept for years. It also created a stealth IT problem (something that we will address in a different blog at some point). Traditional IT (and their risk managers, executives, and line of business decision makers) have become more and more comfortable with moving workloads to the cloud, and the ability to expand the technology footprint into this space is very appealing, not only for time-to-market reasons, but for the enormous cost savings. And the inherent barrier of hybrid cloud integration and management preventing rapid growth has pretty much disappeared.
As an IT professional, business leader or decision maker, once you cross that hump and gain comfort with having a hybrid cloud architecture for your company, you start to realize the benefits of having that kind of environment (again, the subject of a future blog). AppGate, from Cryptzone, is the perfect tool to bridge your on premises workloads with your AWS or other cloud provider environment(s).
I challenge you to explore the tools and capabilities that are constantly being invented and revised to help your company embrace the benefits of a hybrid cloud architecture!
Thursday, April 20, 2017
AWS Summit San Francisco...
I posted this blog on the Cryptzone website after the AWS Summit. You can find the original posting here.
Another great Amazon conference just wrapped up. The AWS Summit in San Francisco was earlier this week, and 7500 of my closest Amazon friends met at Moscone West to learn about the latest from AWS and their partners.
Amazon does not traditionally make any major announcements at these Summits (they save those for the re:Invent conference in December), but they did make a couple anyway: a SaaS licensing model (in addition to the other models that they have) and a code-writing interface called CodeStar for writing optimized applications on the AWS platform. You can read about these and all of the other announcements here.
We had hundreds of people stop by the Cryptzone booth, interested in learning how AppGate can help secure their AWS and hybrid environment(s) and more about the Software-Defined Perimeter (SDP). And learning (for me at least) is always a two way street – I am constantly probing and prodding for the real world concerns that our customers and potential customers might be having. Here are the most common themes that I heard about while visiting with exhibitors and attendees in the expo hall:
ANOTHER Security Product for AWS: Yes, as you might imagine, there were MANY security vendors at the Summit (and at re:Invent), all claiming that you need to buy their product or your AWS environment will perish and be wiped from the Earth. Well, as much as I appreciate the zeal of our competitors in the security space, those attending AWS are a bit more sophisticated than that – they understand that security in AWS may not be perfect, but it is pretty decent for what their requirements are, and that any third party security solution needs to address specific shortcomings that they see in their environments. The sky is not falling, and they are looking for a partner that will make their enterprise more secure and easier to manage.
Addressing the Hybrid Cloud: It is almost blasphemy to discuss environments that are not AWS while at an AWS Summit. But the fact is that every person I talked to had workloads that were NOT located exclusively in the AWS cloud – every one of them had some kind of hybrid environment. Connecting and managing those separate environments is a challenge, and IT professionals are looking for ways to solve this challenge. Thankfully, AppGate is the solution!
Compliance is Lurking: While seemingly never front and center at these events, addressing regulatory compliance considerations is always in the back of people’s minds. So many of the security solutions on the market are purchased – at least in part – to address a compliance-related concern. Security professionals often do not have the luxury of purchasing a tool only for compliance reasons. They are very aware, however (and showing greater awareness), of how a particular tool can be used to address compliance regulations while solving their security needs.
As I said – great conference, and I am looking forward to the future AWS Summits / conferences / meetups!
Monday, January 23, 2017
Beards - Make Faces Great Again...
Lately, I've been trying to stay out of the social media world. Too much has been going on, and social media has been a VERY caustic place, regardless of the topic or view. But I promise that I haven't fallen off the earth. You will see more from me as we approach the RSA conference in a couple of weeks, I promise.
In the meantime, a former colleague and very good friend sent me this sticker.
Again, no political message intended. But this site *IS* The Security Beard, right?