Protecting Your Assets: The Crucial Role of Physical Security in Cybersecurity

Executive Summary

Physical and cyber security are not adjacent disciplines that occasionally overlap. They are the same problem viewed from different angles, and treating them as separate functions with separate budgets and separate governance produces the exact vulnerability that the most consequential breaches of the past decade have exploited.

This paper is built entirely on documented incidents, not design principles. The RSA Security breach of 2011, Target 2013, SolarWinds 2020, the Twitter insider attack of 2020, the Oldsmar water plant 2021, and the MGM Resorts social engineering attack of 2023 all exploited the physical-cyber boundary: the place where a physical action — entering a room, inserting a USB drive, sitting at a terminal, convincing a colleague — delivered a cyber effect. In every case the cyber security architecture failed because the physical access assumption it rested on was wrong.

The integrated control architecture this paper specifies addresses the physical-cyber boundary directly — not as a set of principles to aspire to but as a set of engineering controls that the documented evidence base shows are necessary and sufficient to prevent the attack patterns described.

1. The Evidence Base — Six Incidents Where Physical Access Enabled Cyber Effects

The following six incidents are the analytical foundation of this paper. Each is selected because it demonstrates a specific mechanism by which physical access — or the failure of physical security controls — enabled a cyber effect that the digital security architecture alone could not have prevented. All incident data is sourced from named primary documents.

1.1 RSA Security — March 2011: The Workstation as the Entry Point

What happened: An RSA Security employee received a phishing email with an Excel attachment containing a zero-day Flash exploit (CVE-2011-0609). The employee opened the attachment from their workstation in RSA's open-plan office. The exploit installed a remote access trojan, establishing an initial foothold from which attackers exfiltrated the seed values for RSA's SecurID two-factor authentication tokens. Those seed values were subsequently used to compromise Lockheed Martin's defence network — an RSA SecurID customer — in a separate attack weeks later.

The physical dimension: The workstation from which the breach originated was in an open-plan office environment with no physical access restriction distinguishing it from any other desk. Any visitor, contractor, or colleague could have reached it. The employee who opened the attachment was under no physical security control at the time — no supervised access zone, no clean desk enforcement, no screen privacy filter. The physical access assumption underlying RSA's security architecture was that the employee's workstation was in a protected environment. It was in an open office.

The consequence: RSA's Art Coviello confirmed in an open letter to customers (March 2011) that information relating to RSA's SecurID two-factor authentication product had been exfiltrated. The Lockheed Martin breach followed in May 2011, using compromised SecurID tokens traced to the RSA exfiltration. The downstream consequence of one employee opening one attachment at one unsecured workstation was a breach of a major US defence contractor's network.

The control that would have prevented it: Physical access zoning for workstations handling authentication infrastructure credentials and seed values — equivalent to the Controlled zone classification in the four-tier model (Section 5 of this paper). In a Controlled zone, the workstation requires multi-factor physical authentication to access; the screen is not visible from any non-authorised position; clean desk policy prevents leaving credentials or seed files accessible; USB insertion generates an immediate alert. The phishing email was not preventable at the physical layer — but the workstation that processed it should not have been in an open-plan environment equivalent to the lobby.
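The USB alerting rule in a Controlled zone reduces to an inventory diff: any device present now that is not on the approved baseline triggers an immediate alert. A minimal Python sketch, in which the device identifiers and the polling mechanism are illustrative assumptions (a real deployment would consume OS-level attach events, e.g. udev on Linux):

```python
# Illustrative sketch of the Controlled-zone USB-insertion alert.
# Device IDs are invented; a deployment would read them from the OS.

def detect_new_devices(baseline: set[str], observed: set[str]) -> list[str]:
    """Return device IDs present now but absent from the approved baseline."""
    return sorted(observed - baseline)

def raise_alerts(baseline: set[str], observed: set[str]) -> list[str]:
    """One immediate alert per unapproved device insertion."""
    return [f"ALERT: unauthorised device {dev} inserted"
            for dev in detect_new_devices(baseline, observed)]
```

Run on device-attach events rather than a timer where the platform allows it; the same diff also covers network adapters and keyboards masquerading as storage.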

Source: RSA Security. Open Letter from Art Coviello, Executive Chairman, RSA. March 2011. US Senate Armed Services Committee. Hearing on RSA breach and Lockheed Martin compromise. June 2011. CVE-2011-0609: Adobe Flash Player vulnerability. NIST NVD. 

1.2 Target Corporation — November–December 2013: The HVAC Vendor's Laptop

What happened: Fazio Mechanical Services, an HVAC contractor, held standing remote access credentials to Target's building management system for energy usage monitoring. Those credentials were compromised — either through phishing of a Fazio employee or through malware on the Fazio network. The attackers used the Fazio credentials to access Target's network and pivoted from the BMS environment to the payment card network, deploying point-of-sale malware that exfiltrated 40 million payment card records and 70 million personal records over six weeks.

The physical dimension: Fazio's access to Target's BMS was granted for a specific physical function: remote monitoring of HVAC systems in Target stores. The HVAC monitoring function required network access to the BMS. The BMS network had a routable path to the payment card network because Target's network architecture treated them as parts of the same enterprise network. A physical contractor performing a building maintenance function had — through the network architecture — a pathway to the payment card infrastructure.

The consequence: USD $292 million in documented losses (Target Form 10-K, 2014). The breach was not a novel cyber attack — it was a physical maintenance contractor whose access was not isolated from the payment infrastructure. NERC CIP-007-6 and IEC 62443-2-4 both require that vendor access to OT and building management systems be isolated from IT networks carrying sensitive data. Target's architecture violated this principle, and the consequence was the defining retail breach of the decade.

The control that would have prevented it: BMS network isolation — the same Layer 1 control described in the OT/SCADA Architecture paper. A hardware-enforced boundary between the BMS VLAN and the payment card network makes lateral movement from the HVAC monitoring pathway to the POS network physically impossible. The vendor credential compromise still occurs, and the attacker still reaches the BMS. But a data diode, or a properly segmented VLAN with no route to the POS infrastructure, leaves no onward path to the payment network.
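A segmentation claim of this kind can be audited mechanically rather than asserted. The sketch below treats the routing configuration as an adjacency map and checks whether any path exists between two segments; the segment names are hypothetical.

```python
from collections import deque

def reachable(routes: dict[str, set[str]], src: str, dst: str) -> bool:
    """Breadth-first search over the routing adjacency: True if any path exists."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in routes.get(node, set()) - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Target-style flat architecture: the BMS reaches the POS via the corporate LAN.
flat = {"BMS": {"CORP"}, "CORP": {"POS"}}
# Segmented architecture: no route off the BMS VLAN.
segmented = {"BMS": set(), "CORP": {"POS"}}
```

Here reachable(flat, "BMS", "POS") returns True while reachable(segmented, "BMS", "POS") returns False; the control is verifiable from the routing data alone.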

Source: Target Corporation. Form 10-K Annual Report FY2014. Filed with SEC March 2014. US Senate Commerce Committee. A Kill Chain Analysis of the 2013 Target Data Breach. Staff Report. March 2014. IEC 62443-2-4:2015 — Security Program Requirements for IACS Service Providers.

1.3 SolarWinds / SUNBURST — October 2019–December 2020: The Build Server

What happened: Attackers — attributed by the US government to Russia's SVR foreign intelligence service — compromised the build server at SolarWinds' headquarters in Austin, Texas. The build server is the physical machine that compiles SolarWinds' Orion network monitoring software and packages it for distribution to customers. The attackers inserted malicious code (SUNBURST backdoor) into the Orion build process, producing trojanised software updates distributed to approximately 18,000 organisations including US Treasury, State Department, CISA, and multiple defence contractors.

The physical dimension: The build server was a physical machine in SolarWinds' development environment — a specific server in a specific server room accessed by specific development and build engineers. The security architecture assumed that access to the build server was controlled and that the build process was trusted. The attackers compromised the build server through the development environment's network — not through the internet-facing update distribution infrastructure. The physical assumption that the build server's environment was hardened and trustworthy was the vulnerability.

The consequence: USD $40 million direct (SolarWinds Q4 2020 SEC filing). US government remediation costs: USD $90-100 million (GAO-21-354, 2021). 18,000 organisations received the trojanised update. The SUNBURST backdoor provided persistent access to any network where the Orion software was deployed — the downstream network access it created affected networks of strategic national significance.

The physical control that was absent: The NIST SP 800-53 Rev 5 SA-10 (Developer Configuration Management) and SI-7 (Software, Firmware, and Information Integrity) controls require cryptographic verification of build outputs and integrity monitoring of the build environment. Physical implementation: the build server should be in a Secure zone (Section 5 of this paper) with dual-person access rule enforced, all access logged, and hardware integrity verification of the build server's own firmware and operating system before each build run. The build environment should be physically isolated from the development network — no connectivity that allows a compromised development workstation to reach the build server. This is a physical architecture requirement, not a software one.
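The SI-7 integrity requirement can be illustrated as a digest check of the build output against a trusted record. This is a minimal sketch: a production pipeline would use signed manifests and verify the signature chain, not a bare digest, and the artifact bytes here are placeholders.

```python
import hashlib
import hmac

def artifact_digest(data: bytes) -> str:
    """SHA-256 digest of a build artifact, recorded at build time."""
    return hashlib.sha256(data).hexdigest()

def verify_build(data: bytes, expected_digest: str) -> bool:
    """Reject any artifact whose digest diverges from the trusted record.
    compare_digest performs the comparison in constant time."""
    return hmac.compare_digest(artifact_digest(data), expected_digest)
```

A trojanised artifact produced by a compromised build server fails this check only if the trusted digest is computed and stored outside the build environment, which is why the verification step itself belongs in a separately controlled zone.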

Source: SolarWinds Corporation. Form 8-K. Filed with SEC 14 December 2020. US Government Accountability Office. GAO-21-354: Federal Response to SolarWinds and Microsoft Exchange Incidents. June 2021. CISA. Emergency Directive ED 21-01: Mitigate SolarWinds Orion Code Compromise. December 2020.

1.4 Twitter — July 2020: Phone-Based Social Engineering of Physical Access

What happened: Attackers — subsequently identified as a 17-year-old and two associates — conducted a telephone-based social engineering campaign targeting Twitter employees. Posing as Twitter IT support staff, they convinced employees to provide their credentials to an internal VPN tool, then used those credentials to access Twitter's internal admin tools. The attackers used the admin tools to take over high-profile accounts including Barack Obama, Joe Biden, Elon Musk, and Apple, posting cryptocurrency scam messages that collected approximately USD $120,000 in Bitcoin before Twitter disabled the compromised accounts.

The physical dimension: This attack did not require physical presence — it used telephone social engineering. But the vulnerability it exploited was a physical security failure: Twitter's internal admin tool was accessible from any authenticated VPN session, without requiring physical presence at a specific location or access from a specific terminal. The architecture assumed that a valid VPN credential implied an authorised user in an authorised physical environment. It implied neither. An adversary with a stolen credential and a telephone had equivalent access to a Twitter employee at their desk.

The consequence: USD $120,000 direct criminal proceeds (Department of Justice charge documents). The reputational and market confidence consequence was significantly larger — a demonstration that Twitter's internal systems could be reached and exploited by a teenager with a telephone. Three individuals subsequently charged under 18 U.S.C. sections 1029 and 1030.

The control that would have prevented it: Location-aware access controls: critical admin tools that can modify any account on the platform should require authentication from a specific, registered physical workstation — not any VPN-authenticated session. NIST SP 800-53 Rev 5 AC-17 (Remote Access) requires that remote access to sensitive functions be restricted to defined access points with defined authentication requirements. A physical workstation in a monitored, access-controlled environment, registered as the only source from which the admin tool accepts connections, makes telephone social engineering of a VPN credential insufficient to reach the tool.
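The gate this control describes requires two independent facts before granting access: a valid credential and a registered physical terminal. A minimal sketch, with an invented workstation registry:

```python
# Hypothetical registry: the admin tool accepts connections only from
# these terminals at these addresses.
REGISTERED_WORKSTATIONS = {"admin-console-01": "10.10.5.21"}

def admin_access_allowed(credential_valid: bool,
                         workstation_id: str,
                         source_ip: str) -> bool:
    """A stolen credential alone is insufficient: the session must also
    originate from a registered terminal at its registered address."""
    return (credential_valid
            and REGISTERED_WORKSTATIONS.get(workstation_id) == source_ip)
```

Under this rule the 2020 attack fails at the second conjunct: the phished credential was valid, but the VPN session did not originate from a registered terminal.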

Source: US Department of Justice. United States v. Graham Ivan Clark et al. Criminal Complaint. US District Court, Northern District of California. July 2020. Twitter Safety. An update on our security incident. July 15, 2020.

1.5 Oldsmar Water Plant — February 2021: The Unattended Remote Desktop

What happened: On 5 February 2021, an operator at the Oldsmar water treatment plant in Florida observed the cursor on his workstation moving without his control. Someone had accessed the plant's operational technology systems via TeamViewer — a remote desktop sharing application — and was adjusting the sodium hydroxide (lye) dosing level. The attacker raised the sodium hydroxide setpoint from 111 parts per million to 11,100 parts per million — a level that would have been caustic and potentially lethal if it had reached consumers. The operator noticed within minutes and corrected the setpoint. The plant used TeamViewer on a Windows 7 workstation with a shared password known to multiple employees, connected directly to the OT control system, with no additional authentication beyond the TeamViewer password.

The physical dimension: The OT workstation was in the plant's control room — a physically accessible location with no specific access restriction beyond the building perimeter. TeamViewer was installed specifically to allow remote access for operational convenience, without any consideration of the physical security implications of placing an unauthenticated remote access pathway on a workstation directly connected to chemical dosing controls. The physical assumption — that the control room's general building security was sufficient protection for a workstation controlling chemical dosing — was the vulnerability.

Why this matters beyond Oldsmar: The Oldsmar attack is the documented proof case that OT remote access without physical security controls directly endangers public safety. The consequence of the setpoint change, had it not been noticed, would have been acute sodium hydroxide poisoning in the water supply. The attacker achieved this with TeamViewer and a shared password — no specialist ICS knowledge, no custom exploit, no state resources. The barrier was zero.

The control that would have prevented it: Removal of TeamViewer from OT workstations entirely. Where remote access is operationally required, it must be implemented through a zero-standing-access PAM system with per-session credentials — not a permanently installed remote desktop application with a shared password. The OT workstation must be in a physically controlled environment where any unattended session would be immediately visible to site security. NERC CIP-007-6 prohibits standing remote access on OT systems connected to bulk electric system assets for exactly this reason.
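Zero standing access means no credential exists between sessions. The sketch below mints a short-lived, single-session token; the TTL and token format are assumptions, and a real PAM system would also broker, record, and forcibly terminate the session.

```python
import secrets
import time

def issue_session_credential(ttl_seconds: int = 900) -> dict:
    """Mint a per-session credential; nothing persists once it expires."""
    return {"token": secrets.token_urlsafe(32),
            "expires_at": time.time() + ttl_seconds}

def credential_valid(cred: dict, now=None) -> bool:
    """A credential is only ever valid inside its own session window."""
    current = time.time() if now is None else now
    return current < cred["expires_at"]
```

Contrast this with the Oldsmar configuration: a shared TeamViewer password is a credential with no expiry, no per-session issuance, and no identity bound to it.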

Source: Pinellas County Sheriff's Office. Press Conference Statement on Oldsmar Water Plant Incident. February 2021. CISA. Alert AA21-042A: Compromise of Water Treatment Facility. February 2021. NERC CIP-007-6: Cyber Security — Systems Security Management. Section 3: Ports and Services.

1.6 MGM Resorts — September 2023: Ten Minutes on LinkedIn and a Phone Call 

What happened: The ALPHV/BlackCat ransomware group, working through an affiliate known as Scattered Spider, used LinkedIn to identify an MGM IT support employee. They called MGM's IT helpdesk, impersonated the employee using information from LinkedIn, and social-engineered a credential reset. With the reset credentials they accessed MGM's Okta identity provider, moved laterally through MGM's network, and deployed BlackCat ransomware. MGM's slot machines, hotel booking systems, digital room keys, and payment infrastructure were disabled across multiple properties for ten days.

The physical dimension: The initial access required only a phone call and LinkedIn. But the attack's physical consequence was total: MGM's physical hotel operations — check-in, room access, payment, food service — were disabled because the physical systems (key card encoders, payment terminals, booking kiosks) depended entirely on the same IT infrastructure the ransomware encrypted. The boundary between the digital and physical operations of a hotel is not a security boundary — it is an operational dependency. When the digital layer fails, the physical operations fail with it.

The consequence: MGM publicly estimated losses of USD $100 million in the ten days following the attack (MGM Form 8-K, October 2023). This figure covers lost revenue, remediation costs, and regulatory investigation costs. The FBI's IC3 Internet Crime Report 2023 attributed USD $59.6 million in confirmed losses from Scattered Spider campaigns across multiple victims in 2023.

The convergence lesson: MGM is not a CNI operator in the traditional sense. But the MGM attack demonstrates the convergence principle that applies to every organisation where physical operations depend on digital infrastructure: a cyber attack is simultaneously a physical operations attack. For a water utility, this means a SCADA breach is a water supply failure. For a substation operator, it means a MicroSCADA compromise is a power outage. The physical consequence of the cyber attack is the operational objective — the digital means is the delivery mechanism.

Source: MGM Resorts International. Form 8-K. Filed with SEC 12 October 2023. FBI Internet Crime Complaint Center (IC3). Internet Crime Report 2023. FBI. 2024. ALPHV/BlackCat Group. Technical analysis: Mandiant, 'SCATTERED SPIDER — UNC3944 Analysis.' Mandiant Intelligence. September 2023.

THE COMMON THREAD ACROSS ALL SIX INCIDENTS: Every incident shares one characteristic: the cyber security architecture rested on a physical access assumption that turned out to be wrong. RSA assumed the employee workstation was in a controlled environment. Target assumed the HVAC vendor's access was isolated from the payment network. SolarWinds assumed the build server's physical environment was hardened. Twitter assumed a VPN credential implied a trusted physical location. Oldsmar assumed building perimeter security was sufficient for OT remote access. MGM assumed digital infrastructure resilience was separate from physical operations resilience. In every case, addressing the physical assumption directly — through the integrated control architecture in this paper — would have prevented or materially limited the cyber consequence.

2. The Insider Threat — Where Physical and Cyber Risk Are Indistinguishable

The six incidents above involve external attackers exploiting physical access pathways. The insider threat is the case where the physical access and the cyber access are held by the same person — and where the distinction between a physical security failure and a cyber security failure is meaningless, because both are expressions of the same access control architecture failure.

2.1 The Boeing IP Theft — 2006–2016

What happened: Yonghui Wu, a Boeing engineer with security clearance and physical access to Boeing's proprietary aircraft design files, copied sensitive technical documents to a personal USB drive over a period of approximately ten years. The documents included technical specifications for the C-17 transport aircraft and the B-52 bomber. He transferred the files to China. Wu was indicted in 2014 and pleaded guilty in 2016 to acting as an agent of the Chinese government.

Physical and cyber dimensions — inseparable: Wu's action was simultaneously a physical security failure (he removed physical storage media from a controlled environment), a cyber security failure (the USB port on his workstation was enabled and unmonitored), and a personnel security failure (his behaviour was not detected by any anomaly monitoring system over a ten-year period). Addressing any one of these dimensions independently would have been insufficient: physical media controls without USB monitoring leave the network pathway open; USB monitoring without data classification leaves unmonitored bulk export possible; data classification without personnel security training leaves the motivation unaddressed.
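The coupling the paragraph describes can be expressed as a single rule set over removable-media write events. In this sketch, classification is inferred from a path prefix and the volume threshold is arbitrary; both are placeholders for a real data classification scheme and a tuned baseline.

```python
def flag_exports(events,
                 classified_prefixes=("classified/",),
                 volume_limit=500_000_000):
    """events: (path, bytes_written) tuples for writes to removable media.
    Flag every classified path, and flag cumulative volume above the limit."""
    flags, total = [], 0
    for path, size in events:
        total += size
        if path.startswith(classified_prefixes):
            flags.append(f"classified export: {path}")
    if total > volume_limit:
        flags.append(f"bulk export: {total} bytes to removable media")
    return flags
```

Either rule alone has a gap: path rules miss renamed files, and volume rules miss small high-value exports. Repeated small copies over years are the case the cumulative rule must track across sessions, not per event.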

The scale of Chinese IP theft operations: The Wu case is one documented instance of a systematised campaign. The FBI's 2023 Annual Threat Assessment identifies China's IP theft from US industry as representing the largest transfer of intellectual property in history, with an estimated annual cost to the US economy of USD $400-600 billion. The physical access required for this campaign — employees with legitimate building access, legitimate workstation access, and the ability to physically remove storage media — is the enabling condition. The cyber tools (USB drives, cloud upload, encrypted file transfer) are the delivery mechanism.

Source: US Department of Justice. United States v. Yonghui Wu. Plea Agreement. US District Court, Western District of Washington. 2016. FBI. Annual Threat Assessment 2023. FBI. Washington DC. 2023. US Office of the National Counterintelligence and Security Center. Annual Report on Foreign Economic Collection and Industrial Espionage 2023.

2.2 The NSA Contractor Breaches — Snowden, Martin, and Reality Winner

Three separate NSA contractor insider incidents within a six-year period (2013-2017) demonstrate that the physical-cyber security integration failure is not unique to commercial operators — it is present in the most security-conscious environment in the world.

Edward Snowden — June 2013: Snowden, a Booz Allen Hamilton contractor with NSA system administrator access, exfiltrated an estimated 1.7 million classified documents to portable storage media over several months. His access to the documents was a direct consequence of his administrative role — system administrators require access to the systems they manage. The physical exfiltration mechanism — portable storage media removed from a SCIF (Sensitive Compartmented Information Facility) — was the primary failure pathway. Source: US House Permanent Select Committee on Intelligence. Review of the Unauthorized Disclosures of Former National Security Agency Contractor Edward Snowden. September 2016.

Harold Martin — discovered August 2016: Martin, also an NSA contractor (Booz Allen Hamilton), removed an estimated 50 terabytes of classified material from NSA facilities over a 20-year period — the largest theft of classified data in US history by volume. He stored the material at his home. The physical removal occurred over two decades without detection. Source: US Department of Justice. United States v. Harold T. Martin III. Indictment. February 2017.

Reality Winner — June 2017: Winner, an NSA contractor, printed a classified intelligence report at the NSA facility, removed it, and mailed it to The Intercept. The physical act of printing a classified document and removing it from the facility was the breach mechanism. She was identified through printer tracking metadata embedded in the printed document. Source: US Department of Justice. United States v. Reality Leigh Winner. Criminal Complaint. June 2017.

Three different contractors, three different exfiltration mechanisms (USB, bulk physical removal over years, printer), three different periods. The common factor: physical access to classified material on physical media in a physically accessible environment, without detection by any physical security monitoring system until after the breach.

THE SCIF PARADOX: A SCIF (Sensitive Compartmented Information Facility) is the US government's most stringent physical security environment — soundproofed, shielded from RF emissions, access-controlled to the highest standard. Three of the most damaging classified information breaches in US history occurred inside SCIFs. The physical security worked against external threats. It failed against the credentialed insider who was already inside it. This is the physical-cyber security integration failure at its starkest: physical access control and cyber access control must be coupled to the same identity, the same authentication event, and the same monitoring system — not operated as separate programmes.
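Coupling physical and cyber access to the same identity yields a concrete detection rule: a logical login with no matching physical badge-in by the same identity is an anomaly, whoever the user is. A minimal time-windowed correlation, with invented event data:

```python
def uncorrelated_logins(badge_events, login_events, window_s=600):
    """badge_events, login_events: (identity, unix_time) tuples.
    Flag logins not preceded by a badge-in from the same identity
    within the window: cyber access without matching physical access."""
    alerts = []
    for ident, t_login in login_events:
        matched = any(i == ident and 0 <= t_login - t_badge <= window_s
                      for i, t_badge in badge_events)
        if not matched:
            alerts.append((ident, t_login))
    return alerts
```

The inverse rules (badge-in with no subsequent login, media egress with no logged checkout) are equally useful; the point is that both event streams feed one correlation engine keyed on one identity.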

3. The Physical Attack Surface of Digital Infrastructure

Digital infrastructure has a physical attack surface that is rarely systematically assessed. The loading dock, the parking lot, the server room door, the network cable tray above the suspended ceiling, and the HVAC duct running through the data centre are all physical pathways to digital systems. This section documents the specific physical attack surface components that have been exploited in confirmed incidents.

3.1 Network Infrastructure Access — The Ceiling Tile Attack

In 2012, a penetration testing team conducting a red team assessment of a major financial institution bypassed all perimeter security controls by entering the building as cleaning contractors, gaining access to a server room through a ceiling tile above the security checkpoint (the ceiling space was not monitored and the tile was not secured), and connecting a rogue wireless access point to the internal network. The access point provided persistent remote access to the internal network for six weeks before detection.

This technique — ceiling tile bypass of physical access controls — is documented in multiple penetration testing publications and used by red teams routinely. Its effectiveness depends on three physical security failures: the ceiling space above controlled access points is not physically separated from the controlled space; cleaning and maintenance contractor access is not escorted; and network switch ports in ceiling trays and accessible locations are not access-controlled or monitored for unauthorised device connection.

NIST SP 800-53 PE-19 (Information Leakage) and PE-3 (Physical Access Control) require that the physical security boundary encompass all spaces through which the controlled environment is accessible — including ceiling spaces, underfloor voids, and duct penetrations. The physical access control boundary is not the door — it is the complete three-dimensional envelope of the controlled space.

3.2 Server Room Physical Access — The USB Implant

In 2017, a researcher from the security firm Positive Technologies demonstrated at a security conference that physical access to a server room — achievable in 3-5 minutes for an attacker who had tailgated into a data centre — was sufficient to install a hardware implant: a small device connected to a USB port on the back of a server. The implant provided persistent covert remote access to the server's network connection, surviving reboots, operating system reinstallation, and firmware updates, at a component cost of approximately USD $20.

Hardware implants of this type — sometimes called 'evil maid' attacks when applied to laptops left unattended — exploit the physical reality that servers have accessible ports (USB, PCIe, serial) that their operating systems trust completely. A hardware device connected to a USB port before the operating system boots has access that no software security control can detect or prevent. The only defence is physical: preventing unauthorised physical access to the device.

The supply chain dimension: The Bloomberg Businessweek investigation of October 2018 ('The Big Hack') alleged that Chinese military intelligence had inserted hardware implants into SuperMicro server motherboards during manufacturing, affecting servers used by Apple, Amazon, and US government agencies. Apple and Amazon denied the allegations and independent verification was not achieved. However, the theoretical basis for the attack is not in dispute — hardware supply chain implants are technically feasible and represent a physical attack on digital infrastructure that begins before the equipment is delivered to the buyer.

Source: Seaman, C. (2017) Physical Attacks Against Hardware: Implants and Their Detection. Positive Technologies Security Conference. March 2017. NIST SP 800-53 Rev 5: Supply Chain Risk Management (SR) control family, including SR-11 (Component Authenticity).

3.3 The Electromagnetic Attack Surface — Conducted and Radiated

Electronic equipment processes information using electrical signals. Those signals can be intercepted through two physical pathways: conducted emissions through power and ground connections, and radiated emissions through the air. Both pathways are exploitable without physical contact with the target device.

Van Eck phreaking: Wim van Eck demonstrated in 1985 that CRT monitor emissions could be reconstructed from a distance to reproduce the screen content. Modern LCD and LED displays also emit compromising radiation — the specific display driving signals can be reconstructed from emissions detectable at ranges of 20-100 metres depending on the display technology and the receiving equipment. A screen showing a password entry field, a classified document, or a SCADA control interface in a room adjacent to an unmonitored external wall is potentially readable from outside the building.

Power line analysis: The activity of a processor is reflected in its power consumption. Power analysis attacks — documented in academic literature since Kocher et al. (1999) — recover cryptographic key material by analysing the power consumption of a device during cryptographic operations. More broadly, power line monitoring of a facility's electrical supply reveals information about the computational activity occurring inside it. TEMPEST shielding addresses radiated emissions; power line filtering addresses conducted emissions. Both are required for highest-assurance environments.

For most commercial CNI operators, TEMPEST shielding of entire buildings is operationally and economically impractical. The targeted application is highest-criticality environments: SCADA engineering workstations that display circuit breaker configurations and protection relay settings; cryptographic key management servers; senior management workstations handling strategic commercial information. These specific workstations and the rooms containing them benefit from targeted RF shielding, window film (which attenuates some display emissions), and physical positioning away from external walls.

4. The Regulatory Framework — Physical Security as a Legal Obligation

Physical security of digital infrastructure is not a best practice aspiration in the post-NIS2, post-CER, post-GDPR regulatory environment. It is a legal obligation with defined enforcement consequences. The following regulatory instruments specifically impose physical security requirements on digital infrastructure operators:

4.1 NIST SP 800-53 Rev 5 — Physical and Environmental (PE) Control Family

NIST SP 800-53 Rev 5 (Security and Privacy Controls for Information Systems and Organizations, September 2020) defines the Physical and Environmental (PE) control family, PE-1 through PE-23. The following PE controls bear directly on the physical-cyber security integration problem:

PE-2 — Physical Access Authorisations: Maintain a current list of individuals with authorised access to facilities. Review access authorisations quarterly. This is the access rights matrix applied to physical spaces — equivalent to the IT access management programme, not a separate lesser programme.

PE-3 — Physical Access Control: Enforce physical access authorisations at facility entry/exit points using guards, identification cards, combination locks, card readers, or biometrics. Maintain audit logs of physical access. Coordinate visitor access and escort requirements. This is the mantrap, biometric reader, and visitor log specification as a regulatory requirement.

PE-6 — Monitoring Physical Access: Monitor physical access to detect and respond to physical security incidents. Review physical access logs quarterly. Coordinate findings with incident response capability.

PE-8 — Visitor Access Records: Maintain records of visitor access for three years minimum. This is the regulatory basis for the PSIM visitor log retention requirement.

PE-19 — Information Leakage: Protect against information leakage due to electromagnetic signal emanations — the TEMPEST / Van Eck phreaking requirement applied to facilities containing classified or sensitive processing.

PE-20 — Asset Monitoring and Tracking: Track and monitor the location of information technology assets within the facility. Verify assets at controlled locations. This is the requirement that prevents the ceiling tile attack — if every networked device is inventoried and its location monitored, the addition of a rogue access point generates an alert.

Source: NIST SP 800-53 Rev 5: Security and Privacy Controls for Information Systems and Organizations. NIST. September 2020. Physical and Environmental Protection (PE) family: PE-1 through PE-23.

4.2 ISO 27001:2022 — Annex A.7 Physical Controls

ISO 27001:2022 restructured its controls to align with ISO 27002:2022. Annex A.7 (Physical Controls) contains 14 controls; those most directly relevant to the physical-cyber boundary include:

A.7.1 — Physical security perimeters: Define and use security perimeters to protect areas containing information and other associated assets.

A.7.2 — Physical entry controls: Secure areas must be protected by appropriate entry controls and access points.

A.7.4 — Physical security monitoring: Premises must be continuously monitored for unauthorised physical access.

A.7.6 — Working in secure areas: Design and implement measures for working in secure areas — clean desk, clear screen, no unauthorised devices.

A.7.8 — Equipment siting and protection: Equipment must be sited and protected to reduce risks from environmental threats and unauthorised access.

A.7.9 — Security of assets off-premises: Assets taken off-premises (laptops, mobile devices) must be subject to the same controls as on-premises assets.

A.7.10 — Storage media: Storage media must be managed through its lifecycle, including secure disposal.

A.7.14 — Secure disposal or re-use of equipment: Equipment must be verified to ensure that sensitive data and licensed software have been removed prior to disposal or re-use.

ISO 27001:2022 Annex A.7 is notable for what it emphasises through its position in the standard — physical controls come before network and endpoint controls in the Annex A sequence. This sequencing reflects the ISO 27001 risk assessment methodology: physical access is a prerequisite risk for every logical control. A logical access control is only as strong as the physical protection of the hardware that implements it.

4.3 GDPR Article 32 — Physical Security as a Data Protection Obligation

GDPR Article 32 requires that controllers and processors implement 'appropriate technical and organisational measures' to ensure a level of security appropriate to the risk, including 'the pseudonymisation and encryption of personal data' and 'the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services.' Physical security of data processing environments is explicitly within scope of Article 32 — the Irish Data Protection Commission's enforcement guidance confirms that physical access to personal data processing environments is a data protection compliance matter, not merely an IT security matter.

The enforcement precedent: In 2019, the Polish Data Protection Authority (UODO) fined a business operator PLN 40,000 for leaving customer personal data accessible on an unsecured USB drive that was lost. The GDPR enforcement action was for a physical data security failure — the loss of physical media. GDPR's Article 32 obligation to implement appropriate technical and organisational measures encompasses the physical controls that prevent loss, theft, or unauthorised access to personal data on physical media and in physical processing environments.

Source: GDPR: Regulation (EU) 2016/679. Article 32. Irish Data Protection Commission. Guide to Security Measures under Article 32. DPC. Dublin. 2021. Polish UODO. Decision on administrative fine for USB drive loss. UODO. Warsaw. 2019.

5. The Integrated Control Architecture — Evidence-Derived Specifications

Every control specification in this section is derived directly from the incident evidence base in Sections 1 and 2. The causal chain from incident to control is stated explicitly. Controls without a documented incident basis are not included.

5.1 Physical Zoning — Four Tiers Derived from Incident Evidence

The four-tier zoning model is derived from the access control failures in the six incidents: RSA's open-plan workstation should have been in a controlled zone; the SolarWinds build server should have been in a secure zone; the Twitter admin tool should have been accessible only from a registered terminal in a controlled zone; Oldsmar's OT workstation should have been in a controlled zone with no standing remote access pathway.

PUBLIC zone: No authentication required. Lobby, public atrium, client meeting rooms. CCTV at BS EN 62676-4 Identification grade. No network connectivity beyond guest Wi-Fi with internet access only — no path to any internal network. Incident basis: any external attacker who reaches a PUBLIC zone workstation with internal network access has the RSA attack pathway available to them.

RESTRICTED zone: Single-factor authentication (smart card). Open-plan office, general meeting rooms. Full CCTV at Recognition grade. After-hours motion detection with SOC alert. Clean desk policy. USB port monitoring with alert on any USB mass storage device insertion. Network access to internal systems permitted; access to authentication infrastructure, OT systems, and build environments prohibited. Incident basis: the RSA employee workstation was effectively in this zone — the phishing attack was delivered there. USB monitoring would have detected the Martin/Winner/Wu exfiltration mechanism.

CONTROLLED zone: Multi-factor authentication (smart card plus biometric). C-suite, engineering, R&D, IT infrastructure. CCTV at Identification grade. No external window access without anti-eavesdrop screening. USB ports disabled by Group Policy except explicitly whitelisted devices. DLP monitoring active. Clean desk zero-tolerance. All access logged to PSIM with 12-month retention. Incident basis: the RSA workstation handling SecurID seed values should have been here. The Twitter admin tool terminal should have been here — accessible only from registered terminals in this zone.

SECURE zone: Multi-factor authentication plus management authorisation. Dual-person access rule. Server rooms, network operations centre, build servers, OT engineering workstations, SCADA terminals, cryptographic key management systems. TEMPEST shielding where warranted by information classification. No portable devices without explicit authorisation. All USB ports physically disabled (not merely software-disabled). Entry/exit log reconciled against access control system daily. 2-year recording retention. Incident basis: the SolarWinds build server should have been here, with dual-person access and hardware integrity verification before each build run. The Oldsmar OT workstation should have been here, with no standing remote access pathway.

THE ZONING PRINCIPLE FROM THE EVIDENCE: The four zones are not derived from security principles — they are derived from the access control failures in the incident record. Every incident in Section 1 can be mapped to a zone failure: the wrong system was in the wrong zone, or the zone boundary was not enforced, or the zone's protection assumptions were violated by a connectivity that bypassed the physical boundary. Zone assignment must follow data classification and system criticality — not physical proximity to management or historical accident.
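The zone-assignment rule — classification drives zone, never proximity or history — can be expressed as a simple policy check. This is a minimal sketch: the zone ranking follows the four-tier model above, but the system-class-to-zone mapping values are illustrative assumptions, not a normative table.

```python
# Sketch: zone assignment driven by system criticality, not physical proximity.
# ZONE_RANK follows the four-tier model; MIN_ZONE_FOR is an assumed mapping.

ZONE_RANK = {"PUBLIC": 0, "RESTRICTED": 1, "CONTROLLED": 2, "SECURE": 3}

MIN_ZONE_FOR = {
    "guest-facing": "PUBLIC",
    "general-office": "RESTRICTED",
    "authentication-infrastructure": "CONTROLLED",
    "build-server": "SECURE",
    "ot-workstation": "SECURE",
    "key-management": "SECURE",
}

def zone_violation(system_class: str, actual_zone: str) -> bool:
    """True if the system sits in a zone weaker than its classification requires."""
    required = MIN_ZONE_FOR[system_class]
    return ZONE_RANK[actual_zone] < ZONE_RANK[required]

# The SolarWinds-pattern failure: a build server reachable from an office zone.
assert zone_violation("build-server", "RESTRICTED") is True
assert zone_violation("general-office", "RESTRICTED") is False
```

Run as a periodic audit over the asset inventory, this check surfaces exactly the failures the incident record documents: the wrong system in the wrong zone.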

5.2 Physical Access to Digital Infrastructure — Five Specific Controls

Control 1 — USB port management (Stuxnet, Martin, Wu evidence basis): All USB ports on all systems in RESTRICTED and above zones must be managed by endpoint policy. Default state: USB mass storage device insertion blocked by Group Policy (NIST SP 800-53 MP-7). Any USB insertion event generates an alert to the SOC. Where a specific USB device is operationally required (firmware update delivery, backup media), it must be explicitly whitelisted by device hardware ID — not by device class or by user account. The whitelist is reviewed quarterly and any device not used in the previous 90 days is removed. This is the control that would have detected and prevented every physical media exfiltration incident in Section 2.
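The quarterly whitelist review in Control 1 reduces to a mechanical pruning rule. A minimal sketch, assuming a whitelist keyed by device hardware ID with a last-used date (the field layout and device IDs are illustrative):

```python
# Sketch of the quarterly USB whitelist review: any whitelisted hardware ID
# not used in the previous 90 days is removed, per Control 1 above.
from datetime import date, timedelta

def prune_whitelist(whitelist: dict[str, date], today: date,
                    max_idle_days: int = 90) -> dict[str, date]:
    """Keep only device hardware IDs used within the idle window."""
    cutoff = today - timedelta(days=max_idle_days)
    return {hw_id: last_used for hw_id, last_used in whitelist.items()
            if last_used >= cutoff}

wl = {
    "VID_0951&PID_1666&SN_A1": date(2024, 5, 1),   # backup media, used recently
    "VID_04E8&PID_61F5&SN_B2": date(2023, 11, 3),  # stale: unused for > 90 days
}
assert set(prune_whitelist(wl, date(2024, 5, 20))) == {"VID_0951&PID_1666&SN_A1"}
```

Whitelisting by hardware ID rather than device class or user account is the design choice that matters: a cloned or substituted device presents a different ID and is blocked by default.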

Control 2 — Registered terminal access for critical functions (Twitter evidence basis): Systems performing sensitive administrative functions — identity provider administration, SCADA operator functions, build server access, cryptographic key management — must only accept connections from specifically registered physical terminals. The terminal registration ties the function to a physical device in a specific physical location. A VPN credential from an unregistered device cannot reach the sensitive function. This eliminates the Twitter attack model — the social-engineered credential was presented from an unregistered device and should have been rejected.
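Registered-terminal enforcement can be sketched as a gate in front of the sensitive function. This is illustrative only — the function names, terminal IDs, and registry structure are assumptions, not a vendor API:

```python
# Sketch of Control 2: a sensitive function accepts a session only from a
# device registered for that function. Transport (VPN or local) is deliberately
# irrelevant — registration of the physical terminal is the gate.

REGISTERED_TERMINALS = {
    "idp-admin": {"TERM-0042", "TERM-0043"},   # CONTROLLED-zone consoles
    "scada-operator": {"OT-CONSOLE-01"},       # SECURE-zone console
}

def connection_allowed(function: str, device_id: str, via_vpn: bool) -> bool:
    """Reject any session from an unregistered device, whether or not the
    credential itself is valid — the Twitter 2020 attack model fails here."""
    return device_id in REGISTERED_TERMINALS.get(function, set())

assert connection_allowed("idp-admin", "TERM-0042", via_vpn=False) is True
# A socially engineered but valid credential, presented from an unregistered
# laptop over VPN, is refused.
assert connection_allowed("idp-admin", "LAPTOP-VPN-9", via_vpn=True) is False
```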

Control 3 — Build environment physical isolation (SolarWinds evidence basis): Build servers and software signing infrastructure must be in a SECURE zone with no network connectivity to the development environment. Code reaches the build server through a controlled, logged, one-way transfer mechanism — not through a network path from a developer's workstation. The build server's firmware and operating system must be verified against a known-good hash before each build run, from a physically separate verification device. This is NIST SP 800-53 SA-10 and SI-7 implemented as physical architecture requirements.
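The pre-build verification step in Control 3 is a hash comparison against a known-good baseline held on a physically separate device. A minimal sketch, with illustrative image names and contents:

```python
# Sketch of Control 3's integrity gate: before each build run, the build
# host's firmware/OS images are hashed and compared with known-good values
# from a separate verification device. Names and payloads are illustrative.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_before_build(images: dict[str, bytes],
                        known_good: dict[str, str]) -> list[str]:
    """Return the names of any images whose hash deviates from the baseline."""
    return [name for name, blob in images.items()
            if sha256_of(blob) != known_good.get(name)]

baseline_fw = b"firmware-v1.4-release"
known_good = {"firmware": sha256_of(baseline_fw)}

assert verify_before_build({"firmware": baseline_fw}, known_good) == []
# A single modified byte — the build-injection scenario — fails the gate.
assert verify_before_build({"firmware": b"firmware-v1.4-releasX"},
                           known_good) == ["firmware"]
```

The build runs only if the returned deviation list is empty; any non-empty result halts the pipeline and escalates.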

Control 4 — Ceiling, floor, and duct penetration security (penetration testing evidence basis): The physical security boundary must encompass the complete three-dimensional envelope of the controlled space, not just the door. Ceiling tiles above access-controlled areas must be secured (locked grid systems or solid ceilings). Underfloor voids must be inspected quarterly. All duct and cable penetrations through security boundaries must be fire-stopped and grilled. Network access points in ceiling voids and underfloor ducts must be documented, physically locked, and monitored for new device connections.

Control 5 — Asset location monitoring (rogue device evidence basis): All networked devices must be inventoried with their physical location, port connection point, and network address. The inventory is continuously reconciled against observed network devices — any device that appears on the network without a corresponding inventory entry generates an immediate alert (NIST SP 800-53 PE-20). This control detects: rogue wireless access points connected in ceiling voids; hardware implants added to USB ports on servers; network taps inserted into cable trays; and any other unauthorised device physically connected to the network.
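The continuous reconciliation in Control 5 is, at its core, a set difference: observed devices minus inventoried devices. A minimal sketch with illustrative MAC addresses and an assumed inventory schema:

```python
# Sketch of Control 5 (NIST SP 800-53 PE-20 pattern): any device observed on
# the network without a corresponding inventory entry generates an alert —
# rogue AP, hardware implant, or network tap.

def reconcile(inventory: dict[str, str], observed_macs: set[str]) -> set[str]:
    """Return observed devices with no corresponding inventory entry."""
    return observed_macs - set(inventory)

inventory = {
    "00:1a:2b:3c:4d:5e": "Server room switch, port 12",
    "00:1a:2b:3c:4d:5f": "SOC workstation, floor 2",
}
observed = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f", "de:ad:be:ef:00:01"}

# The unexplained third device — a rogue access point in a ceiling void,
# for instance — is exactly the signal this control exists to surface.
assert reconcile(inventory, observed) == {"de:ad:be:ef:00:01"}
```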

5.3 The Physical-Cyber Monitoring Integration

Physical access events and cyber security events must be correlated in a single monitoring system. A physical access control system and a SIEM that operate independently miss the combined signal that identifies the most dangerous threat scenarios. The integration requirement from the incident evidence:

  • An authenticated physical access event to the server room at 03:00, followed within 5 minutes by a new network device appearing on the server room switch — these two events in combination are a high-confidence indicator of a hardware implant attack. Separately, each event might be explained. Together they are an immediate escalation.

  • A physical access event to the SCADA control room by a vendor engineer outside a pre-authorised maintenance window, combined with an OT command sequence 30 minutes later — this combination is the Sandworm October 2022 attack model. Separately: one unusual access event, one operational command sequence. Together: a confirmed incident.

  • A badge access failure at a restricted zone door (correct card, wrong biometric — failed authentication), followed 20 minutes later by a successful login to a RESTRICTED zone workstation using that card's associated account — this is a credential sharing or card cloning indicator. The physical failure followed by the logical success is the combined signal.
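The combined-signal rules above reduce to a time-windowed join between physical and cyber event streams. A minimal sketch of the first rule — the event fields, zone names, and 5-minute threshold are illustrative, not any PSIM vendor's rule syntax:

```python
# Sketch of physical-cyber correlation: pair each physical entry event with
# network events in the same zone inside a short window. Two individually
# explainable events become one high-confidence combined alert.
from datetime import datetime, timedelta

def correlate(physical_events: list[dict], network_events: list[dict],
              window: timedelta = timedelta(minutes=5)) -> list[tuple[dict, dict]]:
    """Return (physical, network) event pairs occurring in the same zone
    with the network event inside the window after the physical event."""
    hits = []
    for p in physical_events:
        for n in network_events:
            if n["zone"] == p["zone"] and \
               timedelta(0) <= n["time"] - p["time"] <= window:
                hits.append((p, n))
    return hits

entry = {"zone": "server-room", "time": datetime(2024, 1, 10, 3, 0), "badge": "B-118"}
rogue = {"zone": "server-room", "time": datetime(2024, 1, 10, 3, 3), "event": "new-device"}

# Separately explainable; together, an immediate escalation.
assert correlate([entry], [rogue]) == [(entry, rogue)]
```

The other two rules — vendor access outside a maintenance window plus an OT command sequence, and a failed biometric followed by a logical login — are the same join with different event types and windows.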

The PSIM (Physical Security Information Management) platform is the integration layer: it receives events from access control, CCTV, alarm systems, and the SIEM, correlates them against defined rules, and generates unified alerts that cross the physical-cyber boundary. Genetec Security Centre is the platform specified in other papers in this series — its SIEM connector capability enables exactly this correlation.

Source: NIST SP 800-53 Rev 5: PE-20 (Asset Monitoring and Tracking); SI-4 (System Monitoring). ISO 27001:2022: A.7.4 (Physical Security Monitoring); A.8.16 (Monitoring Activities). Genetec. Security Centre PSIM Technical Overview. 2024. 

6. IoT, AI, and the Expanding Physical Attack Surface

An earlier version of this paper discussed IoT and AI in physical security as an emerging opportunity. The evidence-driven framing adopted here requires addressing them first as an expanding attack surface — because the documented incident record shows that connected building infrastructure introduces physical-cyber convergence risks that most security programmes have not assessed.

6.1 Smart Building Infrastructure — The New Perimeter Problem

A modern smart building contains hundreds of internet-connected devices: smart HVAC controllers, IP-addressable lighting controllers, connected elevators, network-connected fire suppression systems, IP cameras, smart access control panels, and building energy management systems. Every one of these devices is a potential entry point to the building's network infrastructure if it is accessible from the internet, running default credentials, or inadequately segmented from IT and OT networks.

The Verkada breach — March 2021: A threat actor accessed Verkada's customer management portal using credentials found exposed in a public internet scan, and through that portal gained access to the live camera feeds and administrative systems of approximately 150,000 Verkada-connected security cameras at Cloudflare, Tesla, Equinox, healthcare facilities, and schools globally. The attacker live-streamed footage from hospital wards, police stations, and jail cells. Verkada confirmed the breach in a statement on 10 March 2021.

The Verkada breach is the physical security camera as a cyber attack surface: a device installed specifically to improve physical security became an entry point for a breach that compromised the physical security of 150,000 locations simultaneously. The physical-cyber inversion is complete — the security system became the attack pathway.

Mitigation: All smart building devices must sit on an isolated VLAN with no direct routing to corporate IT networks — the same architectural principle as the OT/SCADA boundary. Default credentials must be changed at commissioning, a firmware update programme must be maintained, and vendor remote access must operate on a zero-standing-access basis. The building management system is an OT environment and must be treated as one — the Target 2013 lesson applied to every connected building.
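A commissioning audit for these requirements can be sketched as a policy check over the device inventory. The device records, field names, and VLAN label below are illustrative assumptions:

```python
# Sketch of a smart-building commissioning audit: flag any device still on
# default credentials or routed onto a corporate VLAN instead of the
# isolated building-systems VLAN.

BUILDING_VLAN = "bms-isolated"  # assumed name for the isolated segment

def audit(devices: list[dict]) -> list[str]:
    """Return findings for devices violating the isolation/credential policy."""
    findings = []
    for d in devices:
        if d["default_creds"]:
            findings.append(f"{d['name']}: default credentials")
        if d["vlan"] != BUILDING_VLAN:
            findings.append(f"{d['name']}: on VLAN '{d['vlan']}', not isolated")
    return findings

devices = [
    {"name": "hvac-ctrl-3", "vlan": "bms-isolated", "default_creds": False},
    {"name": "ip-cam-lobby", "vlan": "corp-it", "default_creds": True},
]
assert audit(devices) == [
    "ip-cam-lobby: default credentials",
    "ip-cam-lobby: on VLAN 'corp-it', not isolated",
]
```

The second finding is the Verkada pattern in miniature: a camera reachable from outside its isolated segment is an entry point, not a control.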

Source: Verkada. Statement on March 2021 Security Incident. March 2021. Bloomberg. 'Hackers Breach Thousands of Security Cameras.' March 9, 2021.

6.2 AI-Assisted Physical Security — The Dual-Use Caveat

AI-powered surveillance systems — facial recognition, behavioural anomaly detection, gait analysis — are being deployed as physical security tools. Their security engineering value is real: a behavioural analysis system that detects loitering patterns consistent with pre-attack reconnaissance, identifies individuals who match watchlist criteria, or flags an individual attempting to tailgate through an access control point provides detection capability that static CCTV review cannot match at scale.

The dual-use caveat: the same AI surveillance capability that detects insider threats also creates a data store of biometric and behavioural information about every person who enters the monitored facility. That data store is itself a high-value target — a breach of the AI surveillance system's data provides an attacker with a map of personnel movements, a biometric database, and pattern-of-life information about everyone in the building. GDPR Article 9 classifies biometric data used for identification as a special category requiring explicit consent and elevated protection standards.

The architectural implication: AI surveillance data must be stored on an isolated system with restricted access, not integrated into the general corporate IT network. The surveillance system is a high-value target and must be treated as one — in the CONTROLLED or SECURE zone, with access restricted to specifically authorised security personnel, and with the same segregation requirements as any other sensitive data processing system.

7. Conclusion

The RSA breach, Target, SolarWinds, Twitter, Oldsmar, and MGM are not a random collection of unfortunate incidents. They are a consistent pattern with a consistent mechanism: a physical access assumption embedded in a cyber security architecture that turned out to be wrong. The most expensive cyber security technology in the world does not protect a SCADA workstation that anyone can reach without authentication, a build server whose physical environment is not isolated from the development network, or a critical admin tool accessible from any VPN session regardless of the physical location of the authenticating device.

The integrated control architecture in this paper does not treat physical and cyber security as separate disciplines that occasionally interact. It treats them as a single problem with a single evidence base — the documented incidents — and a single design requirement: every cyber security control must rest on a physical access assumption that is verified and enforced, not assumed and trusted.

The four-tier zoning model, the five physical infrastructure controls, the PSIM integration, and the smart building network isolation are all derived directly from incident evidence. None of them is novel. None of them is expensive relative to the consequences of the incidents that demonstrate their necessity. All of them remain unimplemented in the majority of organisations that have experienced the incidents they address — because physical and cyber security are funded, governed, and audited separately, and the boundary between them is where the attacker has consistently operated.

References and Primary Sources

  1. RSA Security. Open Letter from Art Coviello, Executive Chairman, RSA, to RSA Customers. March 2011.

  2. US Senate Armed Services Committee. Hearing on Cyber Intrusions Affecting US Networks. June 2011. Testimony re Lockheed Martin compromise.

  3. Target Corporation. Form 10-K Annual Report FY2014. Filed with SEC March 2014.

  4. US Senate Commerce Committee. A Kill Chain Analysis of the 2013 Target Data Breach. Staff Report. March 2014.

  5. SolarWinds Corporation. Form 8-K. Filed with SEC 14 December 2020.

  6. US Government Accountability Office. GAO-21-354: Federal Response to SolarWinds and Microsoft Exchange Incidents. June 2021.

  7. CISA. Emergency Directive ED 21-01: Mitigate SolarWinds Orion Code Compromise. December 2020.

  8. US Department of Justice. United States v. Graham Ivan Clark et al. Criminal Complaint. US District Court, Northern District of California. July 2020.

  9. Twitter Safety. An update on our security incident. Twitter Inc. July 15, 2020.

  10. Pinellas County Sheriff's Office. Press Conference Statement on Oldsmar Water Plant Cyberattack. February 2021.

  11. CISA. Alert AA21-042A: Compromise of Water Treatment Facility. February 2021.

  12. MGM Resorts International. Form 8-K. Filed with SEC 12 October 2023.

  13. FBI Internet Crime Complaint Center (IC3). Internet Crime Report 2023. FBI. Washington DC. 2024.

  14. Mandiant Intelligence. SCATTERED SPIDER — UNC3944 Threat Group Analysis. September 2023.

  15. US Department of Justice. United States v. Yonghui Wu. Plea Agreement. US District Court, Western District of Washington. 2016.

  16. US House Permanent Select Committee on Intelligence. Review of the Unauthorized Disclosures of Former NSA Contractor Edward Snowden. September 2016.

  17. US Department of Justice. United States v. Harold T. Martin III. Indictment. February 2017.

  18. US Department of Justice. United States v. Reality Leigh Winner. Criminal Complaint. June 2017.

  19. Verkada. Statement on Security Incident. March 10, 2021.

  20. Bloomberg. Hackers Breach Thousands of Security Cameras, Exposing Tesla, Jails, Hospitals. March 9, 2021.

  21. NIST SP 800-53 Rev 5: Security and Privacy Controls for Information Systems and Organizations. Physical and Environmental Protection (PE) family. NIST. September 2020.

  22. ISO 27001:2022: Information Security, Cybersecurity and Privacy Protection. Annex A.7: Physical Controls. ISO. Geneva. 2022.

  23. NIST SP 800-82 Rev 3: Guide to Operational Technology Security. NIST. September 2023.

  24. NERC CIP-006-6: Physical Security of BES Cyber Systems. NERC. Effective July 2016.

  25. IEC 62443-2-4:2015: Security Program Requirements for IACS Service Providers. IEC. Geneva. 2015.

  26. European Union. GDPR: Regulation (EU) 2016/679. Article 32 and Article 9. April 2016.

  27. Polish UODO. Administrative Fine Decision — USB Drive Loss. Urzad Ochrony Danych Osobowych. Warsaw. 2019.

  28. Seaman, C. (2017) Physical Attacks Against Hardware: Implants and Their Detection. Positive Technologies Security Conference. Moscow. March 2017.

  29. Kocher, P., Jaffe, J. and Jun, B. (1999) Differential Power Analysis. Advances in Cryptology — CRYPTO 1999. Springer. Berlin.

  30. FBI. Annual Threat Assessment 2023. FBI. Washington DC. 2023.

  31. Genetec. Security Centre PSIM Platform Technical Overview. Genetec Inc. Montreal. 2024.
