
XenSummit Topics

Andrew Warfield: Services in the Virtualization Plane

Hardware virtualization provides a much narrower and more state-limited interface between the hypervisor and the OS than conventionally exists between the OS and applications. In this talk, I'll discuss how this narrow and OS-agnostic interface allows innovative low-level services to be built and delivered for existing legacy applications when they run within VMs.

I'll begin by summarizing two such services that we've built at UBC: Remus, a VM-based high-availability system, and Parallax, a storage virtualization system for virtual machine environments. I'll then sketch current work in the lab on topics such as replay-based omniscient debugging and transparent disaster recovery.

Yoshiaki Tamura: Modernization of Kemari using HVM with PV Drivers

In our presentation at this summit, we will report updates on Kemari, our virtual machine synchronization mechanism for fault tolerance. We will describe the implementation details of Kemari and our ongoing work to bring the current version up to the latest version of Xen.


Ying Song: Rainbow: Capacity-Oriented Virtualized Computing Framework for the Virtualized Data Center

In the server components of cloud computing, many concurrent services are consolidated onto a shared virtualized computing platform such as a VM-based data center. Nowadays, however, such enterprise data centers are often under-utilized and may sit partly idle even when the workloads of some hosted services are high. This results from barriers imposed by the computer architecture and the operating system, as well as from the lack of efficient, on-demand, fine-grained resource scheduling. Building on the resource reallocation schemes provided by VMMs, most researchers who aim to improve resource utilization while guaranteeing the quality of hosted services, via on-demand resource scheduling models or algorithms within a physical server, have not been able to offer a good trade-off between resource utilization and service quality; for example, the controller by Padala (University of Michigan, EuroSys '07) improves resource utilization and the performance of some services by heavily reducing the performance of other services. How to improve resource utilization while guaranteeing the quality of hosted services remains a challenge in the VM-based data center.

We propose RAINBOW, a novel capacity-oriented service computing framework, together with a feedback-based multi-tiered resource scheduling scheme consisting of local and global scheduling algorithms, for the VM-based data center, to ensure the QoS of hosted services as well as to improve resource utilization. We implemented a Xen-based prototype to evaluate our scheme on a workload scenario reflecting resource demands in a real enterprise environment. The experimental results show that RAINBOW without resource flowing (RAINBOW-NF) provides 26%~324% improvements in service performance and 26% higher average CPU utilization than a traditional service computing framework in a typical enterprise environment. RAINBOW with multi-tiered resource scheduling further improves performance by 9%~16% for critical services (75% of the maximum margin), introducing up to 5% performance degradation to other services, with 1%~5% better resource utilization than RAINBOW-NF. Compared with Padala's controller, our multi-tiered resource scheduling yields 9% less improvement for critical services while causing only 2% degradation to low-priority services, 20X less than degradations such as the 41% reported in the recent literature.
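The scheduling scheme above is feedback-driven; purely as a rough illustration of the kind of local controller involved (not RAINBOW's actual algorithm), the sketch below nudges a domain's CPU cap toward a response-time target. The domain name, target, gain and measurement probe are assumptions; the xm sched-credit call is shown as one plausible actuation knob on a Xen host.

```python
# Illustrative sketch only: a simple proportional feedback loop that nudges a
# domain's CPU cap toward a response-time target. This is NOT the RAINBOW
# algorithm; the domain name, target, gain and probe are assumptions.
import subprocess
import time

TARGET_MS = 200.0            # desired response time for the critical service (assumed)
GAIN = 5.0                   # proportional gain (assumed)
CAP_MIN, CAP_MAX = 10, 100   # credit-scheduler cap bounds, percent of one CPU

def measure_response_ms(domain):
    """Placeholder probe, e.g. timing a request against the service in `domain`."""
    raise NotImplementedError("plug in a real measurement here")

def set_cpu_cap(domain, cap):
    # One plausible actuation knob on a Xen host: the credit scheduler's cap.
    subprocess.check_call(["xm", "sched-credit", "-d", domain, "-c", str(int(cap))])

def control_loop(domain, interval_s=10.0):
    cap = 50.0
    while True:
        error = (measure_response_ms(domain) - TARGET_MS) / TARGET_MS
        cap = min(CAP_MAX, max(CAP_MIN, cap + GAIN * error))
        set_cpu_cap(domain, cap)
        time.sleep(interval_s)
```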


Toru Hayashi: Oracle VM Customer Case Study - Xen Architecture Based Virtual Systems

Oracle Corporation announced and has distributed the Oracle VM server virtualization product since November 2007. Oracle VM is based on the Xen architecture and is distributed under the GPL. I would like to introduce the high-level differences between Xen and Oracle VM, and Oracle's IT architecture vision with virtualization technology. After that, I would like to pick up several Oracle VM customer case studies from the United States and Japan, looking at what challenges the customers overcame and how they improved their IT systems, from both a technical and a business standpoint.

Hiromichi Itoh or Toru Miyahara: A Case Study of Using Xen at Pioneer Shared Services Japan

Shinetsu Isawa: Practical Application of the Xen Management API with a Lightweight Language (JRuby)

For system integrators who want to add value to a Xen server management system, I am going to explain practical applications of the Xen management API, which provides fine-grained control of distributed Xen systems. I will introduce the basic structure of the Xen API and then explain how to manage the system, with JRuby sample programs.

Topics: Xen management system overview, wire protocol, Xen management system architecture, Xen-API class hierarchy, server configuration, VM lifecycle, and basic Xen-API examples with JRuby code.
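The talk's own examples use JRuby; since the Xen-API is exposed over XML-RPC, the same lifecycle calls can be sketched from any language. Below is a minimal, hedged illustration in Python, assuming xend's Xen-API server has been enabled (port 9363 is commonly used) and that the host and credentials shown are placeholders.

```python
# Minimal sketch of the Xen-API wire protocol (XML-RPC). The host, port and
# credentials are placeholders; xend's Xen-API server must be enabled in
# xend-config.sxp (port 9363 is commonly used).
try:
    import xmlrpclib                      # Python 2, contemporary with Xen 3.x
except ImportError:
    import xmlrpc.client as xmlrpclib     # Python 3

server = xmlrpclib.ServerProxy("http://xen-host.example.org:9363/")

# Every Xen-API call returns a {'Status': ..., 'Value': ...} structure.
session = server.session.login_with_password("user", "password")["Value"]
try:
    for vm_ref in server.VM.get_all(session)["Value"]:
        name = server.VM.get_name_label(session, vm_ref)["Value"]
        state = server.VM.get_power_state(session, vm_ref)["Value"]
        print("%s: %s" % (name, state))
finally:
    server.session.logout(session)
```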

Iustin Pop: Project Ganeti at Google

Last year our group gave a talk at Xen Summit November 2007 about Ganeti, the software we use to manage virtual machines inside Google (Roman Marxer: Ganeti - a Xen based high availability cluster). This year we would like to come back with another kind of talk - not so much a technical one, but more related to deployments, scaling, integration, etc. - basically the rest of the 'virtualization' environment besides the virtualization software itself.



Gosuke Miyashita: Operating Xen Domains through LL (Perl/Python) with libvirt

I will introduce libvirt and how to operate Xen domains with the Perl and Python bindings of libvirt. I will also introduce the system management tool "Func", a Python application that uses libvirt.
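As a hedged sketch of the kind of operation the talk covers, the snippet below uses the libvirt Python binding to connect to a local Xen hypervisor, list domains, and start one; the connection URI and the domain name "guest01" are assumptions for illustration.

```python
# Minimal sketch using the libvirt Python binding against a local Xen host.
# The connection URI "xen:///" and the domain name "guest01" are assumptions.
import libvirt

conn = libvirt.open("xen:///")
try:
    # Running domains are listed by numeric ID, defined-but-inactive ones by name.
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        print("running: %s (id=%d)" % (dom.name(), dom_id))

    for name in conn.listDefinedDomains():
        print("inactive: %s" % name)

    # Boot a defined domain, and later ask the guest for a clean shutdown.
    dom = conn.lookupByName("guest01")
    dom.create()
    dom.shutdown()
finally:
    conn.close()
```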

Yasushi Hiratani: XenServer 5.0: How Citrix leverages open source Xen

Citrix XenServer 5.0 was released on October 15th in Japan, targeting data center virtualization, with many enhancements aimed at virtualizing data center servers. This session explains how XenServer 5.0, which is based on open source Xen, leverages it to deliver a product that can be used for production systems in the data center.


Toshiki Iida: Examples and Points to Note for P2V Migration on Xen

Satoshi Watanabe: Bringing Xen to Mission-Critical Systems

With the virtualization and consolidation of systems onto fewer physical servers, the need for high-availability solutions has never been greater. Solutions that are simple to administer, cost-effective, and capable of delivering the highest levels of availability will be key to expanding the use of server virtualization across the enterprise and to mission-critical applications. In this session, Marathon will introduce the architectural characteristics and benefits of everRun VM, a comprehensive software solution for virtual servers that provides "dial-able" availability, permitting a choice of availability levels ranging from simple VM fail-over to full system fault tolerance.

Isaku Yamahata: Paravirt_ops/ia64 Status Update

The xen/ia64 community has been working on ia64 paravirt_ops in order to merge the Xen modifications into the upstream Linux tree, and the first minimal patches have now been merged. This presentation will discuss these activities, the current development status, and future directions.

First, x86 paravirt_ops will be reviewed briefly before moving on to ia64-specific topics. The talk will then discuss paravirt_ops on ia64: why it is necessary on ia64, how it differs from the x86 version and why, what the challenges are, and the current approaches to those challenges. In conclusion, the current status of ia64 paravirt_ops will be summarized and the future plan discussed.


Naoki Nishiguchi: Evaluation and Consideration of the Credit Scheduler for Client Virtualization

Human beings are in general very sensitive to jitter in response time, so such jitter should be eliminated for client operating systems that run GUIs in a virtualized environment.

We have found that the load of other domains affects the time the GUI domain spends waiting on the run queue (varying it from a few microseconds to tens of milliseconds, depending on the number of loaded domains), and that this causes the jitter in response time. In this presentation, we report the measurement results and our analysis of the cause, and we propose a new credit scheduler that solves this problem. We also plan to measure the response of a GUI VM whose devices are passed through with VT-d, and the response when stubdom is used.
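As a rough illustration of the kind of measurement involved (not the authors' actual tool), the sketch below repeatedly asks to be rescheduled after a short sleep and summarizes how far the observed latency overshoots the request; inside a GUI domain, that overshoot includes time spent waiting on the run queue.

```python
# Toy jitter probe, not the authors' tool: repeatedly ask to be rescheduled
# after 1 ms and record by how much the observed latency overshoots the
# request. The overshoot roughly includes time spent waiting on the run queue.
import time

def measure_jitter(samples=1000, sleep_s=0.001):
    overshoot_us = []
    for _ in range(samples):
        start = time.time()
        time.sleep(sleep_s)
        overshoot_us.append((time.time() - start - sleep_s) * 1e6)
    overshoot_us.sort()
    n = len(overshoot_us)
    return {
        "median_us": overshoot_us[n // 2],
        "p99_us": overshoot_us[int(n * 0.99)],
        "max_us": overshoot_us[-1],
    }

if __name__ == "__main__":
    print(measure_jitter())
```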

Stephen Maden: Leveraging Xen for the Enterprise with VI Centre from Virtual Iron


Virtual Iron is a company from the Boston area in the USA, founded in March 2003. Originally the company developed software that could run a single application concurrently on a pool of Linux servers. There was obvious traction in the marketplace for running multiple operating systems within the bounds of a single physical device, and Virtual Iron had developed its own hypervisor too. In 2006 the company took what has proven to be a very good decision: it dropped its own hypervisor development and instead joined the Xen open source project. Now, just two years later, Virtual Iron has delivered Xen with its software worldwide and in many verticals. Virtual Iron remains strongly committed to Xen and to making server virtualization simple and effective for small, medium and large customers alike. In Q4 2008 Virtual Iron will release version 4.5 of its self-titled software, incorporating Xen 3.2 and the Novell SLES 10 SP2 kernel.

The sweet spot for Virtual Iron has been the SMB customer. The go-to-market strategy has always been, in every geography, distribution via the reseller channels. Virtual Iron's partner programme, Channel One, has been very successful in attracting partners seeking a real alternative to what is already out there, and in allowing resellers of all sizes to put together great solutions for their customers. It is interesting to note that Virtual Iron has also won some very large business, proving the software is very capable of providing a solid platform for even the most demanding enterprise use cases.


Sang-bum Suh: Xen for Mobile Devices

Takahiro Shinagawa: Introduction to the BitVisor and Comparison with Xen

This presentation gives an introduction to the BitVisor, a virtual machine monitor developed in Japan, and compares it with Xen.

The BitVisor is developed under the Secure VM Project, which is promoted by the National Information Security Center (NISC) and carried out mainly by the University of Tsukuba. The BitVisor provides OS-independent security functionality, including encryption of storage and networks and identity management with smart cards. This presentation covers the architectural differences, advantages and disadvantages, and performance comparisons between the BitVisor and Xen.

Takayuki Sasaki: A Fine-grained VM Access Control Framework for Secure Collaboration

Toward a secure collaboration workspace, we propose a fine-grained VM access control framework that allows plug-in modules to perform a variety of security functions specified by data-sharing policies. We report on a prototype of this framework on Xen 3.1.0 and an open plug-in interface that lets developers easily extend security functionality such as mail filtering, file encryption, logging, and virus scanning. We also show a use case in software development and performance evaluation results.
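The plug-in interface itself is not specified in the abstract; as a purely hypothetical sketch of what such an interface could look like, the code below chains plug-ins that inspect or veto data-sharing operations. All class names and hooks are invented for illustration and are not the authors' API.

```python
# Purely hypothetical plug-in interface; none of these names come from the
# framework described in the abstract. Plug-ins see each data-sharing
# operation and may transform the data or veto the operation.
class SecurityPlugin(object):
    def on_file_read(self, subject, path, data):
        return data                      # default: pass through unchanged

    def on_file_write(self, subject, path, data):
        return data

class LoggingPlugin(SecurityPlugin):
    def on_file_read(self, subject, path, data):
        print("AUDIT: %s read %s (%d bytes)" % (subject, path, len(data)))
        return data

class WorkspaceOnlyPlugin(SecurityPlugin):
    def on_file_write(self, subject, path, data):
        if not path.startswith("/workspace/"):
            raise IOError("policy forbids writing outside /workspace")
        return data

class PluginChain(object):
    def __init__(self, plugins):
        self.plugins = plugins

    def file_read(self, subject, path, data):
        for plugin in self.plugins:
            data = plugin.on_file_read(subject, path, data)
        return data

    def file_write(self, subject, path, data):
        for plugin in self.plugins:
            data = plugin.on_file_write(subject, path, data)
        return data

chain = PluginChain([LoggingPlugin(), WorkspaceOnlyPlugin()])
chain.file_write("alice", "/workspace/design.doc", b"...")
```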


Simon Horman & Hirokazu Takahashi: Block Device & Network Bandwidth Isolation

This presentation will briefly look at the issue of isolating network and block device resources. The motivation is to allow domUs reasonable access to I/O and network bandwidth even if another domU hosted on the same machine attempts to consume all available resources, for instance because it has been compromised by a virus.

The block device portion of this presentation describes how to make guests on Xen share one or more data stores fairly. It will cover the design of dm-ioband, which is implemented as a Linux kernel module, and results for guests sharing bandwidth using dm-ioband will be presented. The network bandwidth portion of this presentation will look at how existing frameworks provided by a Linux dom0 can be used to provide some form of isolation of network resources. Some performance results will be presented, as well as some ideas for future improvements in this area.
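dm-ioband itself is kernel C; purely as a toy illustration of the weight-proportional sharing idea (not dm-ioband's actual algorithm or interface), the sketch below hands out I/O tokens to groups in proportion to their weights each period. The weights, token pool and group names are made up.

```python
# Toy model of weight-proportional I/O bandwidth sharing, in the spirit of
# (but unrelated to the implementation of) dm-ioband. Weights, the token pool
# and group names are made-up numbers.
class IoGroup(object):
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.tokens = 0                  # I/O units this group may still issue

class BandwidthSharer(object):
    def __init__(self, groups, tokens_per_period=1000):
        self.groups = groups
        self.tokens_per_period = tokens_per_period

    def refill(self):
        """Once per period, split the token pool according to the weights."""
        total = sum(g.weight for g in self.groups)
        for g in self.groups:
            g.tokens = self.tokens_per_period * g.weight // total

    def try_submit(self, group, nr_ios):
        """Admit the I/O only while the group still holds tokens."""
        if group.tokens >= nr_ios:
            group.tokens -= nr_ios
            return True
        return False                     # caller must wait for the next refill

dom1, dom2 = IoGroup("domU1", 40), IoGroup("domU2", 10)
sharer = BandwidthSharer([dom1, dom2])
sharer.refill()
print("%s=%d tokens, %s=%d tokens" % (dom1.name, dom1.tokens, dom2.name, dom2.tokens))
```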

Yuji Shimada: Development of I/O Pass-through: Current Status and the Future

I/O pass-through means that an I/O device is assigned to a domain and guest software can control it directly. In this presentation, I will talk about I/O pass-through for HVM domains. MMIO, PCI configuration register access, port I/O, DMA and interrupts are virtualized for I/O pass-through. We added a feature for reassigning MMIO resources in Domain 0 and improved PCI configuration register access. As a result, we can now assign several types of I/O devices to an HVM domain.

Our future plan is to develop "I/O device power management from the guest OS" and "interrupt delivery to the CPU specified by the guest OS", to improve the "ioemu log", and to support "I/O pass-through using a stub domain".

There are two issues we are facing. First, some devices access their own internal memory using an address obtained from the device driver. In this case, the access will fail because the IOMMU will not translate the address. We expect adapter vendors to design I/O devices suitably for I/O pass-through. Second, a problem occurs when guest software needs to access some registers but cannot, because ioemu does not support PCIe-specific features for now. We need your cooperation, because many features need to be implemented.
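To make concrete the kind of PCI configuration-register access that dom0/ioemu has to mediate for a pass-through device, here is a hedged sketch that reads a device's vendor and device IDs from Linux's sysfs on dom0; the BDF shown is a placeholder.

```python
# Sketch: read a pass-through candidate's vendor and device IDs from dom0's
# sysfs, i.e. the first PCI configuration registers that ioemu has to present
# to the guest. The BDF "0000:01:00.0" is a placeholder.
import struct

def read_pci_ids(bdf):
    with open("/sys/bus/pci/devices/%s/config" % bdf, "rb") as f:
        vendor_id, device_id = struct.unpack("<HH", f.read(4))
    return vendor_id, device_id

vendor, device = read_pci_ids("0000:01:00.0")
print("vendor=0x%04x device=0x%04x" % (vendor, device))
```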



Koichi Onoue: Controlling System Calls and Protecting Application Data in Virtual Machines

Security systems can monitor and control the behavior of untrusted programs and prevent attackers from tampering with computing resources. However, security systems can themselves be attacked. To protect security systems, it is useful for them to cooperate with a virtual machine monitor (VMM). In this presentation, we propose a security system that monitors and controls the behavior of application programs (target applications) and protects data related to them from outside the untrusted VMs.

To control the behavior of applications, programs running outside the VMs (monitor programs) control the system calls that the applications execute. To bridge the semantic gap between the hardware-level state that the VMM can observe and the software-level state that our system uses to control application behavior, we use guest-OS information about processes and system calls. System calls are intercepted by the VMM and controlled according to security policies described by users.

In addition, we provide a mechanism to protect the data related to the target applications, such as executables and configuration files (application data). To protect application data in guest physical memory without making the guest OS aware of the translation, the VMM provides different views of physical memory depending on whether the target applications or other code is executing. To protect files related to the target applications on the virtual disk, monitor programs running in another VM manage them. When a target application operates on the protected data, the operations are redirected to the monitor programs. With this memory and virtual-disk protection, leakage and tampering of application data can be prevented even if other programs, including the guest OS kernel, are compromised. We design and implement the proposed system on paravirtualized Xen and evaluate it.
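The policy language is not given in the abstract; as a hypothetical illustration of the kind of check a monitor program could apply to an intercepted system call, the sketch below matches (process, syscall, path) tuples against a small allow/deny table. The rule format and the example rules are assumptions, not the authors' format.

```python
# Hypothetical policy check for an intercepted system call; the rule format
# and the example rules are invented for illustration, not the authors' format.
import fnmatch

POLICY = [
    # (target process, syscall, path pattern, verdict)
    ("httpd", "open",   "/etc/shadow", "deny"),
    ("httpd", "open",   "/var/www/*",  "allow"),
    ("httpd", "execve", "*",           "deny"),
]

def check_syscall(process, syscall, path):
    """Return the verdict for an intercepted call; unmatched calls are denied."""
    for proc, call, pattern, verdict in POLICY:
        if proc == process and call == syscall and fnmatch.fnmatch(path, pattern):
            return verdict
    return "deny"

print(check_syscall("httpd", "open", "/var/www/index.html"))  # allow
print(check_syscall("httpd", "open", "/etc/shadow"))          # deny
```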



Kenji Kono: VM-Based Approach to Detecting Stealthy Keyloggers

Spyware, which monitors the behavior of users and steals private information such as keystrokes and browsing patterns, is one of the major threats to the security of Internet users. Since spyware runs without interfering with the normal activity of the system and cleverly hides its presence from anti-spyware facilities, detecting spyware is particularly difficult. This paper focuses on a specific type of spyware: keyloggers. The goal of this work is to develop a methodology that can discriminate keyloggers from benign software. We introduce FoxyKBD, a virtual machine based technique that amplifies the behavior of keyloggers. FoxyKBD feeds a large number of keystrokes into the computer system so that keyloggers generate heavy disk access to log the keystrokes.

By statistically checking fluctuations in the disk I/O of the system, FoxyKBD can judge whether keyloggers are installed on the investigated system. Our technique offers several advantages. First, subverting our virtual machine monitor (VMM) based approach is intrinsically difficult due to the strong isolation between guest OSes and the VMM. Second, our approach, which is based on the behavior of the keylogger, allows us to properly discriminate keyloggers even if they use malicious techniques such as code obfuscation, variant generation and rootkit techniques to circumvent commodity anti-spyware facilities. Our experimental results demonstrate that FoxyKBD correctly identified 56 keyloggers collected from various shareware/freeware sites as keyloggers.
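As a rough illustration of the statistical step (not FoxyKBD's actual code), the sketch below correlates a series of injected-keystroke counts with observed disk-write counts per sampling interval; a strong positive correlation would be treated as keylogger-like behavior. The sample counts and the threshold are assumptions.

```python
# Toy version of the statistical check: does disk write activity track the
# number of injected keystrokes per sampling interval? Not FoxyKBD's code;
# the sample counts and the threshold are made up.
import math

def pearson(xs, ys):
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

injected_keystrokes = [0, 500, 0, 800, 0, 1200, 0, 700]     # per interval
disk_writes         = [12, 260, 15, 410, 11, 600, 14, 350]  # blocks written

r = pearson(injected_keystrokes, disk_writes)
print("correlation = %.2f" % r)
if r > 0.8:                              # threshold is an assumption
    print("disk I/O tracks injected keystrokes: keylogger-like behaviour")
```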

Hitoshi Matsumoto: SCSI Support Improvement

We propose an extension of the pvSCSI driver, which has already been merged into Xen 3.3.0.

The pvSCSI driver provides functionality by which an HVM or PV domain can issue SCSI commands to physical SCSI devices, so that the domain can use tape drives, DVD-R/RW, RAID disks and so on. However, the current implementation of the backend driver passes only a few mandatory SCSI commands to the devices and blocks almost all other commands.

We remove this limitation by adding a new SCSI tree mapping mode which assigns a whole SCSI host to the guest domain. In addition to the improvements to the pvSCSI driver, we are evaluating VT-d functionality from the point of view of SCSI support. We briefly report the current evaluation status and some problems found through the evaluation.



Leonid Grossman: Networking Via Direct Function Assignment

The PCI-SIG I/O Virtualization Specifications (released in 2008) enable the design of high-speed PCI Express I/O devices that can be natively shared between multiple operating systems running simultaneously within a single server.

Multiple functions in Single Root I/O Virtualization (SR-IOV) compliant I/O devices, such as the Neterion x3100 10GbE NIC, can be used for VF assignment and direct hardware access from domU, to eliminate the performance impact of virtualization while preserving virtualization advantages such as migration and dom0 privileged operations.

With each function implemented as a fully provisioned netdev interface (rather than a subset of a networking interface), it is possible to use native Linux and Windows networking drivers running in domU. This approach has a number of advantages (performance, support/distribution, feature set, etc.) over the traditional frontend/backend model.

Performance results for the "native", virtualized and direct hardware access approaches on the same server platform will be presented.

Yosuke Iwamatsu: PCI Hotplug for PV Domains

PCI pass-through is considered a key technology for improving I/O performance and VM scalability in virtualized environments. However, it can spoil essential features of virtualization such as live migration and device sharing.

PCI hot-plug for VMs is one solution to this problem. By dynamically inserting physical PCI devices into, or removing them from, a running VM, we can relocate the VM to another machine and reuse the devices for other VMs. This presentation will give an overview of the implementation of PCI hot-plug support for PV domains, which is available in current Xen.



Noboru Iwamatsu: Paravirtualized USB Support for Xen

Supporting USB peripheral devices within guest domains is quite essential, especially for client-side virtualization, because almost all PCs have USB ports and a huge number of USB devices exist. Currently, Xen has two options for providing USB devices to guest domains, but neither is always suitable for client-side virtualization.

The first is simply to pass through a USB host controller (a PCI device) to a guest domain. This works well, but the problem is flexibility. With this option, the entire USB host controller has to be assigned statically to a single domain, and all of the USB devices that belong to the controller are assigned to that domain. That means the other domains cannot use the USB host controller.

The second is to pass through USB devices via QEMU. QEMU is a good emulator, but its USB emulation is implemented as a user-space driver, which imposes several restrictions on device handling and limits performance.

We have been developing a paravirtualized USB driver to achieve a good balance between flexibility and performance. We presented the design concept of the paravirtualized USB driver at the XCI Meeting (August 19). http://blog.xen.org/index.php/2008/08/18/xen-client-initiative-meeting-august-19/

This time, we would like to report our development status and implementation details. It is a new implementation for current Xen. The main features are as follows:

• USB 2.0 support
• Flexible and hot-pluggable device assignment
• Netchannel2 support (planned)
