Entities enlisting the services of red teams generally have their house in order, including management and monitoring infrastructure to assist with operations or security functions. This infrastructure presents as low-hanging fruit on offensive engagements because it meets a number of priority conditions:
1 - It will have some level of network adjacency or access to anything it is managing or monitoring
2 - It will have some level of interface with the subject(s) under management/monitoring (SNMP, SSH, even socket/application layer negotiations)
3 - It is likely to have some level of privileged access to the subject(s)
Anything with proximity, contact, and likely privilege in a single box is surely well secured, right? A search on most engines for the names of these platforms and key terms such as "CVE" or "Vulnerability" should shed some light...
In the real world, where engineering relies on these platforms for its operational lifeblood, the uptime requirements on these back-end pieces of infrastructure result in rare patching/review/hardening, if any, lest they "break stuff we need." Go a bit outside of enterprise (or look around inside some) and the situation gets worse with talent and budget constraints. Further down-market, any such systems may be running on precanned defaults, unpatched from the day they were built, with wide-open access to all destinations. Even a denial of service against one of these things causes blind spots and ops degradation. Full compromise is catastrophic.
To recap from the *last post*, protecting a digital asset is much the same as protecting a physical one: each function must be provided with no more privilege and proximity than needed to run, and each vector of access to the asset must be observed and defended in zones of control. For this example, consider an infrastructure monitoring platform consisting of multiple scheduling, data processing, storage, connectivity, and interface daemons running on a Linux OS and accessible to the ops team via a web portal. Backed by a MySQL database containing connection data for monitoring subjects, the target presents a number of assets requiring protection:
1 - Network proximity which can be abused by gaining a shell on the host or reflecting through its services (pivot point)
2 - All of the binaries and interpreters required to run the monitoring platform, an internal armory.
3 - Information stored in the database and service caches providing credentials to subject systems (and other pre-collected system configuration data).
4 - The application interface of the web UI/API which takes input, processes it, and returns results (even in authentication attempts).
These assets are reachable by a known set of vectors:
1 - Services such as the web UI, SSH, SNMP, syslog, and other bound listeners over the network (server-side)
2 - Information pulled in to the system from subjects and processed by data parsers and database calls ("client-side" attack vectors)
2a - Host-level access if the collection or data parsing service being exploited does not provide privilege sufficient to reach the asset (privesc)
Which provide the basic scope of requirements for hardening measures to implement:
1 - Restrict network services to the minimum possible ingress profiles (network ACLs)
1a - Administrative services and data collection services have wholly separate connectivity requirements
1b - Numerous monitoring protocols utilize unidirectional UDP communications permitting restriction of either inbound or outbound traffic for the service entirely
1c - Internal infrastructure services such as message queues or memory caches need not be exposed to the network at all (even if bound to 0.0.0.0).
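The ingress profiles above are ultimately a short firewall policy. A minimal nftables sketch, assuming a hypothetical ops subnet of 10.0.10.0/24, monitored hosts in 10.0.20.0/24, and the usual ports for each service (adjust all of these to the real deployment):

```
# /etc/nftables.conf -- ingress profile sketch (subnets and ports are assumptions)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept    # message queues and caches stay loopback-only (1c)
        # administrative plane: web UI and SSH from the ops subnet only (1a)
        ip saddr 10.0.10.0/24 tcp dport { 443, 22 } accept
        # collection plane: syslog and SNMP traps from monitored hosts only (1b)
        ip saddr 10.0.20.0/24 udp dport { 514, 162 } accept
    }
}
```

Note the default-drop policy: anything not matching an explicit plane never reaches a listener, even services carelessly bound to 0.0.0.0.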
2 - Implement contextual filters for information passed to the service
2a - Strict cipher specifications to ensure consistent states on SSH and TLS
2b - Application layer filters for the web UI
2c - Syslog/SNMP/etc filters for inbound submission
2d - Ingestion filter for data pulled in by query mechanisms
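As an illustration of 2d, an ingestion filter can be a reject-by-default whitelist applied before any parsing or database call. The field layout below is hypothetical; the point is the shape of the check, not the format:

```python
import re

# Strict shape for an inbound status line: "<hostname> <metric>=<number>"
# (this layout is an assumption -- adapt the pattern to the real collector format)
LINE_RE = re.compile(
    r"^(?P<host>[a-z0-9.-]{1,64}) (?P<key>[a-z_]{1,32})=(?P<val>-?\d+(\.\d+)?)$"
)

def ingest(line: str):
    """Return parsed fields, or None if the line fails the whitelist."""
    m = LINE_RE.match(line)
    if m is None:
        # reject-by-default: anything off-profile never reaches the parser
        return None
    return m.group("host"), m.group("key"), float(m.group("val"))
```

Anything with shell metacharacters, oversized fields, or an unexpected layout is dropped before it can exercise the data parser at all.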
3 - Minimize contact and privilege for each service
3a - Prevent services, via jails/namespaces/gradm, from accessing common resources (files, network, cycles, memory)
3b - Restrict processing of untrusted data to isolated environments with no network or filesystem access during execution of the processing phase
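Where the platform runs under systemd, much of 3a/3b can be expressed as sandboxing directives in the service unit (systemd applies namespaces and seccomp under the hood). A sketch, with the unit name and binary path as placeholders:

```ini
# /etc/systemd/system/parser.service (unit name and path are hypothetical)
[Service]
ExecStart=/opt/monitor/bin/parser
DynamicUser=yes            # throwaway UID, no persistent identity
NoNewPrivileges=yes        # no setuid/setgid escalation from this service
PrivateNetwork=yes         # no network during the processing phase (3b)
ProtectSystem=strict       # filesystem read-only outside explicit paths
PrivateTmp=yes
SystemCallFilter=@system-service
MemoryMax=256M             # cap contact with common resources (3a)
CPUQuota=20%
```

Jails, raw namespaces, or gradm policies achieve the same ends on platforms without systemd; the unit file is simply the cheapest place to start.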
4 - Reduce impact of attacks on remaining exposed surfaces
4a - Implement probabilistic and deterministic defenses at the binary level for userspace and kernel
4b - Implement reactive measures to clobber the caller in privilege escalation attempts or block the source address in remote attacks
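The reactive measure in 4b amounts to a failure counter with a tripwire. A minimal sketch, with the block action left pluggable since in practice it would shell out to nftables or signal a firewall agent (both assumptions here, not part of the platform):

```python
from collections import defaultdict

class SourceBlocker:
    """Track failures per source address; fire a block action at a threshold."""

    def __init__(self, threshold: int, block_action):
        self.threshold = threshold
        self.block_action = block_action  # e.g. push a drop rule to the firewall
        self.failures = defaultdict(int)
        self.blocked = set()

    def record_failure(self, src: str) -> bool:
        """Record one failed attempt; return True if the source is now blocked."""
        if src in self.blocked:
            return True
        self.failures[src] += 1
        if self.failures[src] >= self.threshold:
            self.blocked.add(src)
            self.block_action(src)
        return src in self.blocked
```

The same counter-and-tripwire pattern covers the local case: swap the source address for a calling PID and the block action for clobbering the offending process.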
5 - Create informational feedback loop tying responsive and inline defenses into a single source of truth.
5a - Use this to also feed telemetry to the human element, as they will be required when the autonomous response runs out of "known tricks" or becomes intelligent enough to stop caring
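The single source of truth in 5/5a can be as simple as one ledger that every defense reports into, with a severity floor deciding what escalates to the operator. A sketch with hypothetical class and source names:

```python
import time

class DefenseLedger:
    """Single source of truth: inline and reactive defenses all report here,
    and both the automation and the humans read from the same record."""

    SEVERITY = {"info": 0, "warn": 1, "crit": 2}

    def __init__(self):
        self.events = []

    def report(self, source: str, detail: str, severity: str = "info"):
        event = {"ts": time.time(), "source": source,
                 "detail": detail, "severity": severity}
        self.events.append(event)
        return event

    def for_humans(self, min_severity: str = "warn"):
        """Escalate to the operator anything at or above the severity floor."""
        floor = self.SEVERITY[min_severity]
        return [e for e in self.events if self.SEVERITY[e["severity"]] >= floor]
```

Everything lands in one place, so the autonomous response and the on-call human are never working from different pictures of the same attack.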
Meeting the requirements set out in this scope will not turn the objective into an impregnable fortress, but it will raise the bar, likely above the threshold where the system is still a "sweet target," as the complexity of a relevant compromise comes to exceed that of going after the systems it monitors in the first place. How the requirements can be fulfilled even by small organizations is the subject of our next post: Atomic Shelters for Active Observers - Hardening by Example (part 2).