Businesses, both public sector and private, should take notice. NIST has published a draft revision, NIST SP 800-171 Rev. 2, on protecting your environment from modern threat vectors. The following is Oxalis’ interpretation of the changes and how to address them as a CISO or security professional.
Below we delve into what the latest NIST SP 800-171 Revision 2 means for your business.
On June 19, 2019, the National Institute of Standards and Technology, also known as NIST, released new draft recommendations titled “NIST Special Publication (SP) 800-171 Revision 2: Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations.” This is a revision of the original SP 800-171 document, released in 2015, which encourages or requires more robust security around Controlled Unclassified Information, referred to as CUI. Aimed at protecting data housed in nonfederal systems and organizations that support critical programs, the publication emphasizes the heightened risk of exposing CUI to sophisticated adversaries.
As cyber-attacks escalate into cyber-warfare through Advanced Persistent Threats (APTs), NIST offers long-term recommendations for companies in both public and private sectors as these risks continue to grow. On-premises servers are no longer seen as “trust fortresses”; the model is shifting toward “no/low-trust” environments, as previously reliable internal servers are often susceptible to APTs that put internal data at risk.
Learn how you can enhance your data security with 5 Main Takeaways from NIST’s SP 800-171B to Decrease Cyber Insecurity.
1. Destroy Deviations before Detriment: Elevated Configuration Management
NIST expresses a necessity for increased automation in order to support reliable configuration management. Organizations should institute an authoritative source and repository containing a system inventory of approved hardware, firmware, and software components; approved baseline configurations and changes; and verified system software, firmware, scripts, and images. This inventory allows automation to detect deviations from established baseline configurations and to restore components from a trusted source, providing a consistent desired state against which the actual state of systems can be compared at any moment.
What does this mean for your business?
- While organizations may have documentation for their infrastructure and its configuration, automated checks are rare, and they are impossible where manual deployments and physical assets are the norm. In cloud environments and “infrastructure as code” scenarios, automated tools for deployment, asset inventory, and drift reporting are standard.
- Transitioning your workloads from on-premises servers to cloud-based systems can be imperative when attempting to combat APTs.
- Think about leveraging AWS CloudTrail to detect unauthorized activity in your infrastructure.
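To make the drift-detection idea concrete, here is a minimal sketch. All component names and settings are hypothetical and not tied to any specific tooling; real environments would pull the "actual" state from an automated inventory service rather than a hard-coded dictionary:

```python
import hashlib
import json

def fingerprint(config: dict) -> str:
    """Stable hash of a component's configuration (sorted keys)."""
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def detect_drift(baseline: dict, actual: dict) -> dict:
    """Compare the approved baseline inventory against the observed state.

    Returns a report of components that are missing, unapproved, or modified.
    """
    report = {"missing": [], "unapproved": [], "modified": []}
    for name, config in baseline.items():
        if name not in actual:
            report["missing"].append(name)
        elif fingerprint(actual[name]) != fingerprint(config):
            report["modified"].append(name)
    report["unapproved"] = [n for n in actual if n not in baseline]
    return report

# Hypothetical example: one server's TLS setting has drifted, one approved
# component is gone, and an unknown host has appeared.
baseline = {"web-01": {"tls": "1.3", "port": 443}, "db-01": {"port": 5432}}
actual = {"web-01": {"tls": "1.0", "port": 443}, "rogue-01": {"port": 22}}
print(detect_drift(baseline, actual))
# 'web-01' drifted, 'db-01' is missing, 'rogue-01' is unapproved
```

A report like this is exactly what restoring from a trusted source would act on: modified and unapproved components get replaced or quarantined, missing ones get redeployed.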
2. Why You Should Not Trust your “Fortress” of a Server
Transitioning to a “no/low-trust model” means forgetting what you once knew. On-premises servers are no longer the beacon of security they used to be, and NIST encourages organizations to advance security measures that limit the risk of spoofing. It is important that users and their system components are identified and verified before being allowed access to internal servers or networks. This can be done with bidirectional authentication backed by cryptographic key storage, such as an OS keychain or a Trusted Platform Module (TPM), which prevents attackers from replaying captured credentials. Such automation can also prevent unnecessary or detrimental connectivity from unknown or unverified system components by segmenting them into separate networks. Placing unwanted or unknown components into remediation or quarantine networks allows time for appropriate mitigations before potential chaos develops.
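The challenge-response idea behind bidirectional authentication can be sketched in a few lines. This is a simplified illustration using HMAC over fresh nonces, not production code; in practice the keys would live in a TPM or OS keychain rather than process memory, and real deployments would typically use TLS mutual authentication:

```python
import hashlib
import hmac
import secrets

class Party:
    """One endpoint holding a pre-shared key."""

    def __init__(self, key: bytes):
        self._key = key

    def challenge(self) -> bytes:
        # A fresh random nonce per session makes every response unique,
        # so recorded traffic cannot be replayed in a later session.
        return secrets.token_bytes(32)

    def respond(self, nonce: bytes) -> bytes:
        # Prove possession of the key without ever transmitting it.
        return hmac.new(self._key, nonce, hashlib.sha256).digest()

    def verify(self, nonce: bytes, response: bytes) -> bool:
        expected = hmac.new(self._key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def mutual_auth(a: "Party", b: "Party") -> bool:
    """Each side challenges the other; both must prove key possession."""
    nonce_a, nonce_b = a.challenge(), b.challenge()
    return a.verify(nonce_a, b.respond(nonce_a)) and b.verify(nonce_b, a.respond(nonce_b))

shared = secrets.token_bytes(32)
server, client = Party(shared), Party(shared)
attacker = Party(secrets.token_bytes(32))  # does not hold the shared key

print(mutual_auth(server, client))    # True
print(mutual_auth(server, attacker))  # False
```

Because both sides issue challenges, a compromised or impersonated component fails verification in either direction and can be shunted to a quarantine network.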
What is the key takeaway here?
- On-premises IT is by its nature fixed: while security keys can be rotated, the servers themselves are not. To dissuade advanced attackers, the new guidelines suggest constantly changing the systems themselves (3.13.2), rotating service windows, and generally not providing consistency or patterns that can be exploited.
- With cloud deployments, you can change servers, availability zones, geographic regions, or even cloud providers with minimal service interruption, something impossible with any localized deployment.
- Think about leveraging AWS CloudFormation and containerized solutions to allow for portability, and work this into your roadmap.
3. Homogenous Information Technology Environments: Friend or Foe?
Though homogeneous environments may appear to be an adequate solution at a relatively low price, they tend to weaken security by providing access to varied data sets through identical components. Once an APT gains a foothold, malicious code can propagate throughout internal systems and translate readily across similar internal components, leading to disaster. To combat this spread, NIST encourages increased diversity within internal systems as a counter to adversary tactics, techniques, and procedures (TTPs). Heterogeneous or diverse technology systems help to impede the efficiency and speed with which malicious code can propagate.
4. Forget Persistency, Become a “Moving Target”
Cyber-warfare is often waged with a degree of predictability and certainty, as it centers on an attack surface that remains consistent. Servers are fixed entities that rarely rotate, leaving your data susceptible to APTs unless you implement dynamic change. Systems become predictable after periods of stagnation, giving APTs the opportunity to gain access through familiarity. Randomly introducing change within a system creates unpredictability from an external viewpoint. That unpredictability buys valuable time when pursuing an APT, and it can cause an adversary’s attempts at access to fail outright.
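One simple way to keep rotation unpredictable is to add random jitter to the rebuild schedule rather than rebuilding on a fixed date. The sketch below is purely illustrative; the base interval and jitter values are arbitrary assumptions, and a real pipeline would feed these times into its deployment tooling:

```python
import random
from datetime import datetime, timedelta

def rotation_schedule(start, count, base_days=30, jitter_days=10, seed=None):
    """Generate rebuild times at irregular intervals.

    Each interval is base_days plus random jitter, so an outside observer
    watching past rotations cannot predict when the next one will occur.
    """
    rng = random.Random(seed)  # seed only to make the example reproducible
    times, current = [], start
    for _ in range(count):
        offset = base_days + rng.uniform(-jitter_days, jitter_days)
        current = current + timedelta(days=offset)
        times.append(current)
    return times

# Hypothetical example: six irregular rotations starting mid-2019.
for t in rotation_schedule(datetime(2019, 7, 1), count=6, seed=42):
    print(t.date())
```

Every interval lands somewhere between 20 and 40 days, so the environment changes regularly but never on a rhythm an adversary can plan around.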
5. Refresh Yourself: Institute Biannual System Refreshes
Can you destroy and re-create your on-prem infrastructure from an authoritative configuration source twice a year? New guidelines from NIST call for biannual refresh of systems and system components from a trusted source in order to displace any adversaries that have established a presence in your infrastructure. Systematic refreshes make system components non-persistent, which creates doubt and uncertainty for APTs. Under this model, adversaries are often unable to complete collection of CUI because the timeline limits their window of access. Refreshing system components and services with sufficient frequency impedes the ability of APTs to reach crucial information.
What does this all mean?
- Infrastructure as code becomes necessary to “build” your environment.
- Not only does infrastructure as code allow for disaster recovery, blue-green deployments, and configuration automation; it also lets you rebuild your infrastructure to throw off, displace, or eliminate established threats and breaches.
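To make the “rebuild from an authoritative source” idea concrete, here is a minimal sketch. The component names and artifacts are hypothetical; in practice the authoritative source would be a version-controlled infrastructure-as-code repository and an artifact registry, not byte strings in memory:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical authoritative repository: component name -> trusted artifact.
AUTHORITATIVE_SOURCE = {
    "web-app": b"approved web application build v2.1",
    "api":     b"approved api build v1.4",
}
# Manifest of trusted hashes derived from the authoritative artifacts.
TRUSTED_MANIFEST = {name: sha256(blob) for name, blob in AUTHORITATIVE_SOURCE.items()}

def refresh(deployed: dict) -> dict:
    """Discard every running component and redeploy from the trusted source.

    The rebuild happens on schedule regardless of whether tampering was
    detected; deviations are reported as a byproduct, not a precondition.
    """
    deviated = [name for name, blob in deployed.items()
                if TRUSTED_MANIFEST.get(name) != sha256(blob)]
    rebuilt = dict(AUTHORITATIVE_SOURCE)  # fresh copies of trusted artifacts
    return {"environment": rebuilt, "deviated": deviated}

# An attacker has modified the api binary in place; the refresh both
# reports the deviation and replaces it with the trusted artifact.
running = {"web-app": AUTHORITATIVE_SOURCE["web-app"], "api": b"tampered build"}
result = refresh(running)
print(result["deviated"])  # ['api']
```

The key property is that even an implant the deviation report misses does not survive: everything running is replaced by the trusted build on every refresh.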
As technology and infrastructure continue to advance, businesses must be able to evolve continuously alongside standards and regulations. Navigating and implementing dynamic changes can become confusing and overwhelming, wasting valuable time and effort along the way.
Luckily, oxalis.io is here to help! We have a wide range of experience and knowledge consulting across various industries with dynamic needs. We understand processes and implementations that often appear overwhelming or unrealistic. Oxalis can help provide what it takes to help your business navigate success within a world of change. Want to learn how your company can improve practices within new NIST recommendations? Fill out the form below to register for a free demonstration of Oxalis’ consulting services.