OpenTC: An Open Approach to Trusted Virtualization

Dirk Kuhlmann (European HP Laboratories, Bristol)


OpenTC – an Open Approach to Trusted Virtualization
Dirk Kuhlmann, Hewlett Packard Laboratories [1]
dirk.kuhlmann@hp.com
Purpose: Submitted as abstract for a presentation at LinuxTag 2006 in Wiesbaden
Date: January 15, 2006

The advent of 'trusted computing' (TC) technology as specified by the Trusted Computing Group (TCG) has not met with much enthusiasm in the Free/Open Source Software (FOSS) and Linux communities so far. Despite this, FOSS-based systems have become the preferred vehicle for much of the academic and industrial research on Trusted Computing. In parallel, a lively public discussion between proponents and critics of TC has dealt with the question of whether the technology and concepts put forward by the TCG are compatible, complementary or potentially detrimental to the prospects of open software development models and products.

Common misconceptions about TC technology are that it implies or favors closed and proprietary systems, reduces the options for using arbitrary software, or allows remote control over what users can and cannot do on their computers. It has long been argued, though, that these and similar undesirable effects are by no means unavoidable, if only because the underlying technology is passive and neutral with regard to specific policies. It has also been established that the features displayed by TC-equipped platforms are almost exclusively determined by the design of the OS and software running on top of them. Given appropriate design, implementation and validation of trusted software components, and using contractual models of negotiating policies, negative effects can be avoided while improving the system's trust and security properties. This is the intellectual starting point of the EU-supported [2], collaborative OpenTC [3] research and development project that began in November 2005.

Combining FOSS and TC Technology

OpenTC aims to demonstrate that a combination of TC technology and FOSS has several inherent advantages that are hard to match with any proprietary approach. Since TC-protected software components are shielded from inspection during runtime, it is highly desirable that their design documents and source code are available for inspection and validation. Enhanced security at the technical level tends to come at the expense of constraining user options, and the discursive nature of FOSS development and testing could help to get this balance right. Finally, any attempt to introduce TC technology is likely to fail without the buy-in of its intended users, and openness could prove to be the most important factor for user acceptance.

OpenTC sets out to produce building blocks for cooperative security models that can be based on platform properties without having to assume the identifiability, personal accountability and reputation of platform owners or users. For reasons of privacy and efficiency, such models could be preferable to those starting from adversarial behavior. A policy model based on platform properties, however, requires reliable audit facilities and the trustworthy reporting of platform states to both local users and remote peers. The security architecture put forward by the TCG supplies these functions, including a stepwise verification of platform components with an integral, hardware-assisted auditing facility at its root. In the OpenTC architecture, this is used as a basic building block.

Technical Approach: Trusted Virtualization

We chose (para-)virtualization as the underlying architecture for a trusted system.
In doing this, OpenTC addresses a major concern raised with regard to TC: namely, that trusted computing will dictate the exclusive use of components whose trustworthiness has to be vetted by third parties. Virtualization permits running standard OS distributions and applications alongside others that have been locked down for specific purposes. By combining TC and virtualization, it is possible to attest – either to a local user or to a remote peer – that the core platform is configured in a way that inhibits privilege escalation, or that applications and services are executed in a safe environment shielding them from unauthorized intervention.

OpenTC explores this idea for two (para-)virtualization approaches: Xen and L4. Both projects have long since reached beyond their academic roots by making their systems available under FOSS licenses, and both are backed by active developer and user communities. So far, both engines can host multiple customized Linux instances in parallel. The development teams are currently working on integrating hardware-based virtualization support as offered by AMD's and Intel's new CPU generations. Prototype results have demonstrated that this will allow unmodified OS versions to be hosted as well [4]. The new CPU features will offer a choice between hardware- and software-based isolation mechanisms to arrive at the required strength of protection and security. In combination with TC technology, additional features will support establishing trusted paths between software components and I/O devices such as keyboard, mouse and graphics controller, and will help to counter attacks such as keyboard logging and window spoofing, another long-standing class of problems.

The virtualization engines will be initialized in a known-good state by means of boot-chain verification. The BIOS state is measured and logged into the Trusted Platform Module (TPM). The BIOS checks and logs the contents of the master boot record before loading it into memory. The MBR is part of a modified version of GRUB with a software routine to measure and log the rest of the boot loader code prior to passing control to it. The loader, in turn, measures and logs the components of the virtualization layer.
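
The following sketch (in Python, for illustration only) mirrors the measurement chain described above: each stage measures the next component and extends a register with the result before handing over control, following the extend rule used for a TPM Platform Configuration Register (PCR) in TPM 1.2 (new value = SHA-1 of the old value concatenated with the measurement). The component images and their order are placeholders; this is not the actual OpenTC or GRUB code.

```python
import hashlib

def measure(image: bytes) -> bytes:
    """A component's measurement is the SHA-1 digest of its binary image."""
    return hashlib.sha1(image).digest()

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# Each stage is measured and logged *before* control is transferred to it.
# The byte strings below stand in for the real binaries.
boot_chain = [
    ("BIOS",                 b"...bios image..."),
    ("MBR (GRUB stage 1)",   b"...master boot record..."),
    ("rest of boot loader",  b"...remaining GRUB code..."),
    ("virtualization layer", b"...hypervisor and modules..."),
]

pcr = b"\x00" * 20   # PCRs are reset to all zeroes at platform reset
event_log = []       # the log lets a verifier replay the chain later

for name, image in boot_chain:
    m = measure(image)
    event_log.append((name, m.hex()))
    pcr = extend(pcr, m)

print("final PCR value:", pcr.hex())
```

Because each register value depends on all previous ones, a verifier who trusts the root of the chain can recompute the final value from the event log and detect any component that was modified, omitted or reordered.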

Protected Execution Environments

We do not claim originality for the architecture and the policy models implemented, since we borrow heavily from research on trusted operating systems that goes back as far as 30 years. The underlying principles – isolation and controlled information flow – are already implemented in some FOSS-based systems. Compartmentalization as offered by several security-hardened versions of Linux can be used to this end, and it has been demonstrated that such systems can be integrated with TC technology [5]. However, the size and complexity of these implementations is an almost insurmountable obstacle for any attempt to seriously evaluate their actual security properties. Furthermore, the limited size of developer communities, difficulties of understanding, and the complexity of managing configurations and policies continue to be roadblocks for the deployment of trusted platforms and systems on a wider scale. Compared to a fully fledged OS, the tasks of a virtualization layer are very much reduced, so we anticipate a much smaller Trusted Computing Base for the OpenTC architecture. Due to this reduction in size, we expect the approach to be applicable across different types of platforms, including mobile ones.

The architecture separates management and driver environments from the core system and hosted OS instances, thereby reducing the risk of the platform being subverted by rogue kernel components. Both drivers and management components can either be hosted under stripped-down Linux instances, or they can run as generic hypervisor tasks (in order to reduce the TCB size, the second alternative is preferable). The goal of the OpenTC architecture is to provide execution environments for whole instances of guest operating systems that communicate with the outside world through reference monitors guarding their information flow properties. The policy enforced by the monitors is separated from the decision and enforcement mechanisms. It is human readable and can therefore be subjected to prior negotiation and explicit agreement. The monitors kick into action as soon as an OS instance is started. Typically, the policy enforced by them should be immutable during the lifetime of the instance: it can neither be changed through actions initiated by the hosted OS nor overridden by system management facilities. In the simplest case, this architecture will allow two independent OS instances with different grades of security lock-down to run on an end-user system [6]. Clearly, more complex configurations are possible (as frequently needed in server scenarios).
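
As a rough illustration of the separation between policy and enforcement described above, the sketch below parses a small, human-readable policy into an immutable rule set and lets a minimal reference monitor answer information-flow queries against it. The policy syntax, the compartment names and the ReferenceMonitor interface are invented for this example and do not describe the actual OpenTC policy language.

```python
# Minimal sketch: a declarative, human-readable policy fixed when a guest
# instance starts, and a reference monitor that merely enforces it.
POLICY_TEXT = """
# allowed information flows: <source> -> <destination>
banking_os  -> network_driver
banking_os  -> graphics
browsing_os -> network_driver
browsing_os -> graphics
"""

def parse_policy(text):
    rules = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        src, dst = (part.strip() for part in line.split("->"))
        rules.add((src, dst))
    return frozenset(rules)   # immutable for the lifetime of the instance

class ReferenceMonitor:
    def __init__(self, rules):
        self._rules = rules   # decision data, separate from enforcement code

    def is_allowed(self, src, dst):
        return (src, dst) in self._rules

monitor = ReferenceMonitor(parse_policy(POLICY_TEXT))
assert monitor.is_allowed("browsing_os", "network_driver")
assert not monitor.is_allowed("browsing_os", "banking_os")   # no direct flow
```

Because the rule set is frozen when the guest instance starts, neither the hosted OS nor a management tool can relax it afterwards, which matches the immutability requirement stated above.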

From Trusted to Trustworthy Computing

TCG technology cannot magically turn an ordinary computing platform into a more secure one. It offers little more than basic mechanisms to record and report the startup and runtime state of a platform in an extremely compressed and non-forgeable manner. A platform state is represented by a set of hash values that refer to the binaries and configuration files constituting the platform's Trusted Computing Base. Someone (an organization or an individual) has to vouch that a particular set of hashes is equivalent to a system configuration with a desired behavior (for example, that policies cannot be changed in an arbitrary fashion). This attestation will, in turn, be based on atomic attestations referring to the properties of each relevant component. But unless end users personally validate each component, their reasons to believe such statements will ultimately stem from social trust, be it in statements from specific brands, certified public bodies, or peer groups.

A much-discussed dilemma arises if, in order to achieve a desired goal, a user has no choice but to employ components that are suspicious to him but a mandatory part of a configuration that is considered 'trusted' by a peer. This problem becomes worse if the named components come as binaries only and do not allow for analysis. As the recent history of DRM technology has shown, this can easily be used to insert trojans into the user's system under the guise of legitimate policy enforcement modules. Allowing providers to enforce DRM on a specific piece of content I acquired from them does not imply permission for this very mechanism to sift through my hard disk and report back on other content. This illustrates an important principle for components that deserve the label 'trusted': at least in principle, it should be possible to investigate their actual trustworthiness. A clearly stated description of their function and expected behavior should be an integral part of their distribution, and it must be possible to establish that they do not display behavior other than that stated in their description – at compile time, at runtime, or both.

The TCG specification is silent on the procedures or credentials that may be required before a software component can be called 'trusted'. OpenTC works on the assumption that we need defined methodologies, tools, and processes to describe the goals and expected behavior of software components. On this basis, checks of whether their implementation reflects (and is constrained to) this description can be performed. Independent replication of tests may be required to arrive at a commonly accepted view of a component's trustworthiness, which in turn requires accessibility of code, design, test plans and environments for the components under scrutiny. A socially acceptable approach to trusted computing is likely to require a fair amount of transparency and open processes, and in this respect, a FOSS approach looks promising. It may turn into a crucial competitive advantage.
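
To make this concrete, the sketch below shows one way a verifier might act on reported platform state: it replays a measurement log against reference values it has chosen to trust and accepts the platform only if the recomputed digest matches the quoted PCR value. The reference list is a placeholder, and the real TCG attestation protocol additionally involves TPM-signed quotes and nonces that are omitted here.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# Reference measurements the verifier is willing to vouch for, e.g. published
# by a distributor or derived from an in-house evaluation (hex values are
# placeholders, not real digests of any particular software).
KNOWN_GOOD = {
    "grub":        "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3",
    "hypervisor":  "de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3",
    "policy_file": "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed",
}

def verify(event_log, quoted_pcr_hex):
    """event_log: list of (component_name, measurement_hex) pairs."""
    pcr = b"\x00" * 20
    for name, digest_hex in event_log:
        if KNOWN_GOOD.get(name) != digest_hex:
            return False, f"untrusted or unknown component: {name}"
        pcr = extend(pcr, bytes.fromhex(digest_hex))
    if pcr.hex() != quoted_pcr_hex:
        return False, "log does not match quoted PCR value"
    return True, "platform state matches a trusted reference configuration"

# Example: a log whose entries all match the reference list but whose quoted
# PCR value does not replay correctly is rejected.
log = [("grub", KNOWN_GOOD["grub"]), ("hypervisor", KNOWN_GOOD["hypervisor"])]
print(verify(log, quoted_pcr_hex="00" * 20))
```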

Trust, Risk, and Freedom

As it stands, most of us have little choice but to trust systems where more and more things can go wrong. At the same time, our insight into what is actually happening on our machines gets smaller by the day. This greatly reduces our chances of estimating the risks and success probabilities of interactions. The risk becomes close to unmanageable if one has to account for a peer's unconstrained freedom, that is, for his ability to change the rules of the interaction by exercising 'full control' over his platform. At worst, insistence on 'full control' displays ignorance of the technological evolution: most IT experts would readily admit that they no longer actually understand what is going on here and now on their machines. At best, it is an elitist position of IT cognoscenti who forget that most computer systems are owned and operated by ordinary citizens. It is neither the job of these individuals to understand the guts of IT to the point where they can estimate the risk of an interaction, nor should it be their obligation. However, they face the absurd situation of having to bear full legal responsibility for actions initiated on their machines while lacking the knowledge, tools and support to keep these systems in a state fit for purpose.

What we need are reliable indicators of whether it is safe to enter a remote transaction, and mechanisms proving that due diligence has been performed. To answer the question of whether it is desirable or permissible to perform a specific action on a platform, there is no alternative to basing our decision on mechanisms that monitor and report the current state of the execution environment. This consequence follows necessarily from the ever-growing complexity of IT. OpenTC assumes that mutual attestation of a platform's 'fitness for purpose' will be required for proprietary systems as well as FOSS-based ones. Enhanced protection, security and isolation features based on TCG technology will become standard elements of proprietary operating systems and software in due time. This evolution is largely independent of whether FOSS communities endorse or reject the technology. Lack of comparable protection mechanisms for non-proprietary operating systems and software will immediately create problems for important segments of professional Linux users. It is therefore with some concern that we follow discussions on parts of GPL v3 that might regulate how Free Software and Trusted Computing technology can be combined.

As a matter of principle, the question of whether software is secure and trustworthy is completely orthogonal to its licensing policy, not least because any responsibility on this matter is excluded in the GPL. Secondly, TC does not constrain the freedom to modify and recompile GPL'ed code, but taking the liberty of making arbitrary modifications to a software component will necessarily invalidate the security assurances for the unmodified one. A re-evaluation can establish that the original assurances still hold, but until a re-evaluation has taken place, the security properties of the modified version are undefined. This is by no means specific to the TC approach, but applies equally, for example, to the Linux server distributions that have been evaluated according to the Common Criteria: a change to any of the evaluated components results in losing the certificate.

Many commercial, public or governmental entities have chosen non-proprietary software for reasons of transparency and security. These organizations are typically subject to stringent regulations requiring state-of-the-art protection mechanisms for their IT. If FOSS solutions do not support these mechanisms, the organizations could eventually be forced to replace them with proprietary ones. This situation would be highly undesirable for customers as well as providers of professional FOSS-based solutions, and to help avoid it, a number of important industrial FOSS providers and contributors are participating in the OpenTC effort. OpenTC will help to keep open the option of choosing between proprietary and FOSS solutions, and it will demonstrate in a practical way that Free/Open Source based systems can benefit from Trusted Computing technology.

Footnotes / Links

[1] The content of this paper is published under the sole responsibility of the author. It does not necessarily reflect the position of HP Laboratories or other OpenTC members.
[2] Project Nr. 027635
[3] http://www.opentc.net
[4] Stephen Shankland: Xen passes Windows milestone. http://news.com.com/Xen+passes+Windows+milestone/2100-7344_3-5842265.html
[5] See e.g. Maruyama et al.: Linux with TCPA Integrity Measurement. IBM Research Report RT0507, January 2003. http://www.research.ibm.com/trl/people/munetoh/RT0507.pdf
[6] E.g., Butler Lampson's model with an unconstrained 'green' environment for web browsing, software download / installation and a tightly guarded 'red' side for tax records, banking communications etc. See IEEE Security & Privacy, Vol. 3, Nr. 6, Nov/Dec 2005, p. 3.

About the Author

Dirk Kuhlmann works as a senior research engineer in the Trusted Systems group of Hewlett Packard's European Laboratories in Bristol, UK. He joined HP Labs' security research team ten years ago after receiving his degrees in Computer Science from the Technical University of Berlin. His past activities include work on financial protocols, secure distributed transactions, and platform security. For a number of years now, Dirk's main research interest has been the conditions and requirements for using Open Source software in IT security solutions. In this context, he has analyzed the complementarity of Trusted Computing technology and Open Source based software in multiple publications. Dirk currently acts as the technical lead of the EC-funded integrated project OpenTC, which aims to use Open Source based trusted virtualization layers as a central building block for security-enhanced platforms and systems.
