The USB Type-C Authentication specification provides a method to authenticate USB products, although with some flaws. You might wonder: what about non-USB peripherals? How can we establish trust with them instead of “Trust-by-Default”? That’s how I ended up reading Intel’s PCI Express Device Security Enhancement (DRAFT; we will call it the PCIe authentication specification, or the specification for short), which aims to bring authentication to PCIe peripherals. This post is my thoughts after reading the draft. Opinions are my own.
Even though the specification targets PCIe authentication, it considers several existing specifications, including USB Type-C Authentication, MCTP, and TCG DICE. As a result, the PCIe authentication specification builds upon the existing framework proposed by USB Type-C Authentication and supports MCTP as well. Note that this is a sensible decision considering the arrival of USB4, which is built upon Thunderbolt, which is essentially PCIe.
Compared to USB Type-C Authentication, the PCIe authentication specification explicitly considers the measurement of device firmware and distinguishes between PCIe device measurement and PCIe device authentication. The distinction matters: the device measurement could be tampered with during supply chain attacks, while certificates tell us nothing about the firmware, only the identity of the device. Consequently, PCIe device authentication combines both PCIe device measurement and certificates (Public Key Infrastructure, PKI) to offer a more robust security guarantee.
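The distinction can be sketched as follows. This is a toy verifier of my own making, not from the spec: real implementations would use X.509 chains and signed measurement reports, while here the crypto is reduced to a SHA-384 digest comparison and a boolean standing in for chain validation.

```python
import hashlib
import hmac

def measure_firmware(firmware_image: bytes) -> bytes:
    """RTM role: hash the firmware image (the spec recommends SHA2/3-384+)."""
    return hashlib.sha384(firmware_image).digest()

def authenticate_device(firmware_image: bytes,
                        reference_measurement: bytes,
                        cert_chain_valid: bool) -> bool:
    """Device authentication = valid identity (PKI) AND expected firmware.
    A valid certificate alone says nothing about the firmware, and a
    matching measurement alone says nothing about the device's identity."""
    measurement_ok = hmac.compare_digest(
        measure_firmware(firmware_image), reference_measurement)
    return cert_chain_valid and measurement_ok

fw = b"device firmware v1.2"
golden = hashlib.sha384(fw).digest()
print(authenticate_device(fw, golden, cert_chain_valid=True))         # True
print(authenticate_device(b"tampered", golden, cert_chain_valid=True))  # False
```

Note that neither check subsumes the other: a supply-chain-modified device can present a perfectly valid certificate, and a clone can replay a correct firmware image.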
Moving forward, I like how the specification clearly defines root-of-trust (RoT), root-of-trust for measurement (RTM), and root-of-trust for reporting (RTR). The spec explicitly mentions that both RTM and RTR should be implemented in pure hardware or immutable firmware that could not be manipulated by mutable firmware — kicking standard device firmware out of the trusted computing base (TCB). This requirement actually addressed one of the issues we found in the USB Type-C Authentication — what if the device firmware does the measurement by itself.
Instead of the ECDSA NIST P-256 and SHA-256 recommended in USB Type-C Authentication, this spec recommends ECDSA NIST P-384 or RSA 3072 and SHA2/3-384 or SHA2/3-512, because “PCIe Device Authentication requires a minimum level of 192-bit security.” Honestly, I do not see a strong motivation here, since 128-bit security is sufficient IMO.
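For context, the 192-bit figure maps directly onto primitive sizes via the standard rule of thumb (this mapping is general crypto folklore, not text from the spec): a hash offers roughly half its digest length in collision resistance, which is why SHA-256 falls short of 192-bit security while SHA-384 meets it.

```python
import hashlib

# Collision resistance of a hash is roughly digest_bits / 2,
# so a 192-bit security target calls for a >= 384-bit digest.
for name in ("sha256", "sha384", "sha512", "sha3_384"):
    digest_bits = hashlib.new(name, b"").digest_size * 8
    print(f"{name}: {digest_bits}-bit digest, "
          f"~{digest_bits // 2}-bit collision resistance")
```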
The spec also states explicitly, “Device manufacturers shall employ adequate protections against malicious insider attacks…” I do not recall insider attacks being part of USB Type-C Authentication, and this might be easy to state but hard to achieve, as insider attacks have happened before, e.g., Snowden.
Besides device2host authentication, the spec also envisions host2device authentication and eventually mutual authentication, as we argued in our study of USB Type-C Authentication. The spec further recommends AES-128/256/384 for symmetric crypto (e.g., after DHKE). Maybe AES-128 should not be listed, given that the spec wants a minimum of 192-bit security.
The spec also gives three levels of device private key protection. In Level 1, “Device private key is stored in plaintext from inside the Device (e.g., fuze, internal NVRAM)… is accessible to mutable Device component, such as firmware.” Even if we can argue that security is essentially an economic activity and there is a budget for every defense, what is the point of having this L1 protection? It is totally broken. In Level 2, “Device private key is encrypted using a key encryption key (KEK) which is known only to the immutable Device hardware… is accessible only to Device hardware or immutable firmware.” IMO, L2 should be the default, although I’m a bit skeptical about the secrecy of a secret key known only to hardware. Level 3 mandates secret generation inside the hardware, e.g., using PUF or DICE.
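The three levels can be restated as an access-control question: which component is allowed to see the raw private key? A toy model (the level-to-access mapping is my reading of the spec text, and the `Component` names are mine) makes the L1 problem explicit — mutable firmware, the very thing the RTM/RTR requirements kicked out of the TCB, can read the key:

```python
from enum import Enum

class Component(Enum):
    HARDWARE = "hardware"
    IMMUTABLE_FW = "immutable firmware"
    MUTABLE_FW = "mutable firmware"  # regular, updatable device firmware

# Who may access the plaintext private key at each protection level.
# L3 is modeled as hardware-only since the key never leaves it (PUF/DICE).
KEY_ACCESS = {
    1: {Component.HARDWARE, Component.IMMUTABLE_FW, Component.MUTABLE_FW},
    2: {Component.HARDWARE, Component.IMMUTABLE_FW},  # KEK-wrapped
    3: {Component.HARDWARE},
}

def can_read_key(level: int, caller: Component) -> bool:
    return caller in KEY_ACCESS[level]

# L1 is broken: a single malicious firmware update leaks the key forever.
print(can_read_key(1, Component.MUTABLE_FW))  # True
print(can_read_key(2, Component.MUTABLE_FW))  # False
```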
Same as USB Type-C Authentication, the spec states that “Device private key should be protected against software side-channel attacks as well as against hardware differential power analysis attacks, including all relevant keys and cryptographic algorithms related to the usage of the Device private key.” Presumably, the “software side-channel attacks” refer to vulnerabilities within immutable firmware, and the only hardware side channel highlighted here is power analysis. Here are my concerns: how often could we guarantee no side channels in our code? Why can’t we implement those crypto operations in pure hardware? What about EM side channels? Again, I do not think these are bad requirements at all, but I am hoping to see more justification behind all the statements.
USB Type-C Authentication introduces a number of slots supporting different certificate chains, essentially allowing the firmware to create its own self-signed certificate chain that might be accepted by some vulnerable policy. The PCIe authentication specification inherits multiple slots and even introduces a new SET_CERTIFICATE command for setting new certificate chains into non-zero slots. However, the spec requires that each new certificate chain sign the leaf DeviceCert in slot 0, which is immutable. The sole purpose of this owner-provisioned certificate chain is to simplify certificate chain validation: e.g., we don’t need to verify the slot-0 chain every time. Instead, one could provision a self-signed certificate chain and reduce the length of the chain to validate. Unfortunately, there is still a catch. First of all, it is not clear who has write access to these non-zero slots. The spec uses the term “device owner,” but my gut feeling is that anything might be able to write and overwrite those slots. Second, to accelerate chain validation, a host machine must remember which slot is used for a given device. This means the provisioned slot number has to be added into the PCIe database maintained by the OS. We then have two problems. What if the target slot is overwritten? That would fail the chain validation for that slot, although slot 0 should still work. What if the slot number in the PCIe database is tampered with, e.g., attackers inject their own chains and redirect the slot number in the database to the new slot? The chain validation would succeed while the PCIe device firmware could be compromised, if the challenge-response protocol (containing the DEV_IDENTITY and FW_IDENTITY signed by the Device Private Key) were skipped. In the end, no matter which case we are talking about, a slot-0 chain validation might still be needed. The potential issues of non-zero slot usage might outweigh the benefit of its acceleration.
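To make the slot discussion concrete, here is a sketch of the validation policy I have in mind. The data structures, field names, and fallback logic are mine, not from the spec, and real chain validation involves signature checks rather than a boolean flag. The point it illustrates: whenever the remembered non-zero slot is missing, overwritten, or not anchored to the immutable slot-0 DeviceCert, the verifier ends up doing the full slot-0 validation anyway, which is why the speedup may not be worth the attack surface.

```python
from dataclasses import dataclass, field

@dataclass
class CertChain:
    certs: list              # cert names, root -> leaf (illustrative only)
    signs_slot0_leaf: bool   # stand-in for "chain signs the slot-0 DeviceCert"

@dataclass
class Device:
    slots: dict = field(default_factory=dict)  # slot number -> CertChain

def validate(device: Device, remembered_slot: int) -> str:
    """Try the provisioned (shorter) chain first; fall back to slot 0."""
    chain = device.slots.get(remembered_slot)
    if chain and chain.signs_slot0_leaf:
        return f"fast path: slot {remembered_slot}"
    # Slot overwritten, missing, or not anchored to slot 0:
    # a full slot-0 chain validation is still needed.
    return "fallback: full slot-0 chain validation"

dev = Device(slots={
    0: CertChain(["VendorRoot", "ModelCert", "DeviceCert"], signs_slot0_leaf=True),
    1: CertChain(["OwnerRoot", "DeviceCert"], signs_slot0_leaf=True),
})
print(validate(dev, 1))  # fast path: slot 1
# An attacker overwrites the provisioned slot with an unanchored chain:
dev.slots[1] = CertChain(["AttackerRoot", "EvilCert"], signs_slot0_leaf=False)
print(validate(dev, 1))  # fallback: full slot-0 chain validation
```

Even the “fast path” here is only as trustworthy as the challenge-response step that follows it: a chain that correctly signs the slot-0 leaf still says nothing about the current firmware without the signed FW_IDENTITY.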
As a draft from Intel tackling PCIe authentication, I think this specification balances different pros and cons well. It has indeed reached a balance between high-level concepts and implementation details. A step forward might be implementing it on a real-world PCIe device and seeing all the pitfalls and challenges. One of the biggest challenges during implementation might be drawing the boundary between hardware and software, immutable and mutable, and security and performance.