#HowTo Improve the Security of Your Containers

Containerization is a hot topic for a variety of reasons: flexibility and scalability, but also security.

Look at the focus of containerization: separate environments per transaction, the limited life span of a container, visibility of containers’ requirements at run-time, reuse of hardened base images, and continuous testing and validation prior to the production environment. Simply put, to implement this properly, an organization must embed security by design from the start, but also carry this through into the run-time of the containers.

Consider the current visibility you have within your organization. Is the network flat, or has it been logically separated through solutions such as VLANs, firewalls, trust zones, subnets, and routing rules? Are there firewalls implemented between and on end devices, limiting services/applications from communicating without prior validation? Does deep packet inspection happen between trusted zones, on devices, and/or as data leaves the network? Has the network been designed to consider containers? The most likely scenario is that a few of these options exist, but in varying degrees of maturity.

Whilst denial of service attacks aren’t hacking, they can cause an incident and impact the technology infrastructure, such as a loss of availability of assets when the host running your containers goes down. Further still, can a malicious actor use this as a way to manipulate a container into taking more resources than it should, ultimately impacting the other containers sharing the same host?

We know security by design means embedding security from the start, both by implementing robust, security-focused solutions and by fostering a security mindset. Some considerations technicians can begin with are:

  • Reduce: the access a container can gain at run-time. Does it require that authority? Can you set a cap on resources and a time to live? 
  • Reuse: secured/hardened base images that have been validated against their intended use and stress tested; build code to be reused, which requires not hard-coding secrets. 
  • Remove: unneeded packages and infrastructure that is no longer required (i.e. images no longer being used, transactions no longer going to be run, and containers no longer going to be called), and always validate the integrity of third-party scripts that are being referenced. 
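The “Reduce” point above can be sketched in code. The following is a minimal illustration, assuming the Docker CLI; the image name and limit values are hypothetical placeholders, while the flags themselves (`--cap-drop`, `--memory`, `--cpus`, `--pids-limit`, `--read-only`, `--security-opt no-new-privileges`) are standard `docker run` options:

```python
# Sketch of the "Reduce" principle: cap resources and drop privileges at
# container launch. Image name and limits below are illustrative placeholders.

def hardened_run_args(image, memory="256m", cpus="0.5", pids=100):
    """Build a `docker run` argument list that reduces what a container can do."""
    return [
        "docker", "run",
        "--rm",                                 # remove the container on exit (limited life span)
        "--read-only",                          # immutable root filesystem
        "--cap-drop=ALL",                       # drop all Linux capabilities by default
        "--security-opt", "no-new-privileges",  # block privilege escalation
        "--memory", memory,                     # cap memory so one container cannot starve the host
        "--cpus", cpus,                         # cap CPU share
        "--pids-limit", str(pids),              # limit process count (fork-bomb protection)
        image,
    ]

print(" ".join(hardened_run_args("my-hardened-base:latest")))
```

Building the argument list in one place keeps the hardening auditable and easy to apply consistently across every container launch.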

As I have said before, without visibility into your environment, and without awareness of the transactions, containerization can be a massive black hole of security concerns. At the speed with which containers are called and then killed, it is almost impossible, without a baseline and automated monitoring, for the operations teams to keep up with the environment and possible incidents.

That being said, what are some considerations organizations should put into layering on controls to enhance their security posture at containerization’s speed?

Containers and Their Runtime Security Challenges


Traditionally, networks were looked at as having a hard exterior and a soft interior, because the malicious actors are on the outside, right? Whilst we know the majority of breaches come from external actors, 70% according to the 2020 Verizon Data Breach Investigations Report, the reality is that a focus on “keeping them out” only means that when they do “get in” there is little in the way of mitigating their attack.

Consider an incident where stolen credentials are used: not only has the malicious actor gained access, they are using an authorized account to do it. What about when a phishing email is sent, and seemingly legitimate actions mean an insider is actually assisting that external malicious actor, say by providing them remote access? Then, whilst not the traditional view of an “insider attack”, it is still making use of legitimate accounts to do something unexpected.

When it comes to containerization, short-lived environments still need to be protected. To do this, we need to understand what is expected, i.e. the baseline, compared to what is currently happening. This can be done through things like trust zones, limited communications via firewalls, and deep packet inspection to validate what is going on.
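As a minimal sketch of comparing a baseline against what is currently happening, consider recording the communications each container role is expected to make and flagging anything outside that set. The role names, destinations, and ports below are hypothetical:

```python
# Baseline-versus-observed comparison: expected communications per container
# role are recorded up front; anything outside the baseline is flagged.

EXPECTED = {
    "web": {("app", 8080)},   # the web tier should only talk to the app tier
    "app": {("db", 5432)},    # the app tier should only talk to the database
}

def unexpected_flows(role, observed):
    """Return observed (destination, port) pairs not in the role's baseline."""
    baseline = EXPECTED.get(role, set())
    return sorted(set(observed) - baseline)

# e.g. pulled from connection logs: one legitimate flow, one that is not
flows = [("db", 5432), ("203.0.113.9", 4444)]
print(unexpected_flows("app", flows))  # only the out-of-baseline flow remains
```

The same idea scales up to firewall rules or network policies between trust zones: the baseline is written down once, and deviations become alerts rather than noise.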

Effective Alerting

When we know what is happening, we can then ‘alert’ on the unexpected. There are solutions out there, such as PathSolutions’ TotalView, which, in addition to its configured alerts, has a menu called Gremlins that identifies unusual activity for that interface at that time. That is how we should be considering a holistic security program: identify known signatures of malicious actors/attacks, identify ‘normal’, i.e. a baseline of typical activity, and alert on the anomalous, highlighting and investigating the gremlins.
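The baseline-and-anomaly idea can be illustrated with a simple statistical check. This is a hedged sketch, not how TotalView works internally; the three-standard-deviation threshold and the traffic figures are illustrative assumptions:

```python
# Sketch of alerting on the unexpected: learn what is typical for an
# interface, then flag readings that deviate sharply from that baseline.
from statistics import mean, stdev

def is_anomalous(history, current, k=3.0):
    """Flag a reading more than k standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > k * max(sigma, 1e-9)  # guard against zero variance

baseline = [120, 130, 125, 118, 132, 127]  # e.g. packets/sec seen at this hour
print(is_anomalous(baseline, 129))  # within normal variation
print(is_anomalous(baseline, 900))  # a gremlin: investigate
```

Signature matching catches the known bad; a baseline check like this catches the merely unusual, and the two layers together are stronger than either alone.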

Training and expertise

Any time a new solution is implemented, it is vital that the operations teams are provided training. Changing the concept of how infrastructure works, from persistent to short-lived environments, is a major change that not only requires security controls and training, but also requires reviewing the lifecycle approach and architecture.

As you can imagine, there are different skillsets involved here. Have your team trained up by experts, through certification programs and technology-focused training. Using Docker? Take Docker-specific security training. Kubernetes? Review Kubernetes security whitepapers, take training, and make use of existing benchmarking resources. Another great layer to this is a mentorship program, ensuring team members have people to ask for help.

Layered controls, even the seemingly minuscule 

One issue I have found working within the security industry is the mistaken belief that nothing less than perfection is beneficial. The reality is that the majority of attacks are opportunistic. Avoiding a breach means embedding security foundations and working on continuous improvement, in order to be more difficult than the next company or target.

Whilst it is correct to say that something like a vulnerability scanner isn’t absolutely perfect, this and other controls designed to provide holistic coverage within your environment will enhance the program.

Also implement security policies and groups for automated application to containers, and automate as much as possible on the security side to keep up with requirements. Just because containers are short-lived doesn’t mean their impact is: making use of logging and monitoring, with an appropriate retention policy, is another important part of the security posture.
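As a sketch of that last point, assume container logs are shipped to central storage and pruned only by retention policy, never by the container’s own lifetime. The 90-day window and the record shapes below are illustrative assumptions:

```python
# Sketch of retention-driven log pruning: a container's logs outlive the
# container and are removed only once the retention window has passed.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # illustrative policy choice

def apply_retention(records, now=None):
    """Keep only log records younger than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["ts"] <= RETENTION]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"ts": now - timedelta(days=5),   "msg": "container web-1 started"},
    {"ts": now - timedelta(days=200), "msg": "container web-0 killed"},
]
print([r["msg"] for r in apply_retention(records, now)])
```

The container that logged an event may be long gone by the time an incident is investigated; only the retention policy decides when the evidence disappears.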
