Linux Hypervisor
A hypervisor, also called a virtual machine monitor (VMM), is software, firmware, or hardware that creates and runs virtual machines.
- The computer on which a hypervisor runs one or more virtual machines is called the host machine.
- The virtual machines themselves are called guest machines.
- Type-1, native, or bare-metal hypervisors These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors.
- Type-2 or hosted hypervisors These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. QEMU is a type-2 hypervisor.
KVM by itself is only a kernel module with no user-space management tools, so KVM virtual machines are usually managed with QEMU's tooling.
The QEMU/KVM branch was merged back into mainline QEMU some time ago, so current QEMU releases support KVM by default.
To use the latest KVM features, you need a recent kernel.
libvirt is an open-source virtualization management toolkit. It can manage virtual machines (create, start, shut down, pause, resume, migrate, and destroy them) and hot-add devices such as network interfaces, disks, CPUs, and memory.
libvirt is designed to manage different virtualization engines, such as KVM, XEN, HyperV, and VMWare ESX, through the same interface.
libvirt consists of three main parts (see the example after this list):
- an API library with bindings for mainstream programming languages, including C, Python, and Ruby
- the libvirtd daemon
- the command-line tool virsh
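As a quick sanity check that all three parts are in place, the virsh CLI can query the libvirtd daemon through the library API; the qemu:///system URI below assumes a local, system-level QEMU/KVM setup:
$ virsh --connect qemu:///system version     # library, daemon, and hypervisor versions
$ virsh --connect qemu:///system list --all  # all defined guests, running or not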
What is a vCPU and How Do You Calculate vCPU to CPU?
What is a vCPU?
vCPU is the abbreviation for virtual CPU. As for a definition, a vCPU represents a portion or share of the underlying physical CPU that is assigned to a particular virtual machine (VM).
In the traditional model, a physical server may have one or more physical CPUs, but only a single operating system uses them.
This often leaves the CPUs idle most of the time. Because the idle compute time is long and fragmented, it makes sense to pool these resources and schedule them: let many OSes take turns using the idle capacity, requesting it when needed and releasing it when not.
For a single VM, giving it two vCPUs does not mean it has the compute power of two physical CPUs.
Its CPUs are virtual; the CPUs seen by the guest OS in each VM are not real.
A virtual CPU only gains real compute power when it is mapped onto a physical execution unit (a logical CPU, also called a Hardware Execution Context, HEC).
Suppose a 4GHz physical CPU is to be shared by two VMs at the same time, with VM_A getting 3GHz of compute and VM_B getting 1GHz:
- A logical CPU cannot compute for multiple OSes at the same instant. Within one CPU cycle it can only run one thread; it cannot be split in half, giving part of itself to VM_A while simultaneously giving the rest to VM_B.
- The hypervisor allocates compute by CPU mapping: the logical CPU is mapped to VM_A for three quarters of the time and to VM_B for one quarter. The physical CPU (or core) serves only one VM at a time, and during that time it runs at full speed for that VM.
- The time-share ratio only matters when VMs compete. When only one VM is using the CPU, that VM can use the whole logical CPU.
By rapidly switching between VMs according to these time shares, VM_A effectively gets 75% of the compute and VM_B gets 25%, so VM_A appears to have 3GHz of performance and VM_B 1GHz.
If 10 VMs need compute and each is given one vCPU, then 10 vCPUs must be mapped onto the host's 4 logical CPUs to obtain compute time.
The VMkernel CPU Scheduler maps each vCPU to a currently idle logical CPU. If all 4 logical CPUs are busy, the VMs must queue, and compute time is switched among the VMs according to their shares.
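On a KVM/libvirt host, this vCPU-to-logical-CPU mapping can be inspected and constrained with virsh; a minimal sketch, assuming a guest named vm01 (a placeholder name):
$ virsh vcpuinfo vm01     # shows which physical CPU each vCPU is currently running on
$ virsh vcpupin vm01 0 2  # pins vCPU 0 of vm01 to logical CPU 2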
- Hypervisor Think of a hypervisor as a controller. It’s sometimes referred to as a virtual machine monitor (VMM).
- Socket A socket is an array of pins that hold a processor in place and connect the motherboard to the available processing power.
- Thread A thread is a path of execution within a process.
- Physical Core A physical core, also referred to as a processing unit, is an independent execution unit within the CPU.
- Logical Core A logical core makes it possible for a single physical core to perform two or more actions simultaneously.
A hypervisor is software used to create and run virtual machines (VMs).
It allows one host computer to support multiple guest VMs by virtually sharing its resources such as memory and processing.
Hypervisors can allocate resources to a VM whether it is assigned a single vCPU or numerous vCPUs.
A process contains one or more threads.
The primary difference is that threads within the same process run in shared memory space.
A single physical core may correspond to one or more logical cores.
A logical CPU is a real execution unit (a processor or core). For example, a quad-core CPU looks like a single CPU on the surface, but because it contains 4 cores, each with real compute capability, it actually provides 4 logical CPUs.
Logical cores made the concept of hyper-threading (HTT) possible.
Hyper-Threading (HT) is an Intel CPU technology that presents each physical core as two logical processing units. If HT is enabled in the server's BIOS, the number of logical CPUs doubles; for example, a dual-core CPU shows up as 4 logical CPUs.
There are limitations to hyper-threading versus the physical capabilities of the core.
How Does a vCPU Work?
The hypervisor takes a portion of the physical CPU's compute resources and allocates it to a vCPU, which is assigned to a specific VM. System administrators can use the hypervisor to set up different resource allocations, configuring specific VMs with specific vCPU capabilities.
How Do You Calculate vCPU?
Number of vCPUs = (Threads x Cores) x Physical CPUs
For example, the Intel Xeon E-2288G has 8 cores / 16 threads:
(16 Threads x 8 Cores) x 1 CPU = 128 vCPUs
If you run larger workloads on your VMs, you will have far fewer VMs, so that each VM gets more performance (more vCPUs).
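On a Linux host, the inputs to this formula can be read with lscpu, and nproc prints the total number of logical CPUs the hypervisor has to schedule:
$ lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'
$ nproc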
Virtual Processor Scheduling – VMware and Microsoft
Bigger workloads need more processors, so in order to scale, a new way to schedule CPU cycles against any given processor core was necessary.
This is where Gang Scheduling comes into play : When a multi-vCPU machine requires processor time, all of the vCPUs are “ganged” together and scheduled to perform work against the physical cores.
This is done in order to streamline the process, and attempt to keep the processing synchronized.
Hyper-V does things a bit differently. Virtual processors ask for time on the physical cores and Hyper-V performs ultrafast calculations to determine which core is ready and available.
qcow/qcow2 Image File Format
The QCOW image format is one of the disk image formats supported by the QEMU processor emulator. It stands for "QEMU Copy On Write" and uses a disk storage optimization strategy that delays allocation of storage until it is actually needed.
The concept is similar to a loopback file system: a file on the physical disk serves as a virtual disk image. Unlike a loopback FS, if you give it, say, 8G of space, it does not consume 8G right away; it starts at a few MB (or even less) and grows on demand.
It is a representation of a fixed size block device in a file.
Benefits it offers over using raw dump representation include:
- Smaller file size, even on filesystems which don't support holes (i.e. sparse files)
- Copy-on-write support, where the image only represents changes made to an underlying disk image (see the backing-file example after this list)
- Snapshot support, where the image can contain multiple snapshots of the image's history
- Optional zlib based compression
- Optional AES encryption
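A minimal sketch of the copy-on-write feature mentioned above: create an overlay image that records only the changes made on top of a read-only backing image (the file names are placeholders; newer qemu-img versions also require the -F backing-format flag):
$ qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
$ qemu-img info overlay.qcow2   # the output lists base.qcow2 as the backing file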
- Create an image Create a qcow2 image file named test.qcow2 with size 4G:
qemu-img create -f qcow2 test.qcow2 4G
- convert a raw image file named image.img to a qcow2 image file
$ qemu-img convert -f raw -O qcow2 image.img image.qcow2
$ qemu-img convert -f vmdk -O raw image.vmdk image.img
$ qemu-img convert -f vmdk -O qcow2 image.vmdk image.qcow2
Linux Qemu-img Command Tutorial
qemu-img allows you to create, convert and modify images offline. It can handle all image formats supported by QEMU.
qemu-img [standard options] command [command options]
- Create Disk Image Generally, a VM uses a disk image file to read and write data.
Operating system and user level tools are stored in disk image files like physical disks.
To create a raw disk image "ubuntu-10G.img" which is 10G:
$ qemu-img create ubuntu-10G.img 10G
Formatting 'ubuntu-10G.img', fmt=raw size=10737418240
The disk image format is raw and its actual size is 0 because there is no data in it yet.
But the VM will see the disk image as a 10G disk and will be able to use up to 10G.
Raw is the default image format if no format is specified when creating disk images. Raw disk images do not have special features like compression or snapshots. The best thing about raw disk images is performance: raw disk images are faster than other disk image types.
We can create a qcow2 image with the -f option.
$ qemu-img create -f qcow2 -o size=10G ubuntu.img
$ qemu-img info focal-server-cloudimg-amd64.img
image: focal-server-cloudimg-amd64.img
file format: qcow2
virtual size: 2.2 GiB (2361393152 bytes)
disk size: 525 MiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16
- image shows the disk image file name
- virtual size shows the disk size of the image which will be read by VM
- disk size shows the real space consumed in the host file system (a freshly created, empty image uses only a few KiB; the cloud image above already holds data, so it shows 525 MiB)
Shrinking is done with the convert command, copying the existing disk image into a new one:
$ qemu-img convert -O qcow2 ubuntu.qcow2 ubuntu_s.qcow2
$ qemu-img resize ubuntu.qcow2 +5GB
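After converting or resizing, the result can be verified with the image names used above:
$ qemu-img info ubuntu.qcow2     # virtual size should now include the extra 5G
$ qemu-img check ubuntu_s.qcow2  # consistency check on the converted copy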
KVM ( Kernel-based Virtual Machine )
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images.
Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
Check whether our hardware supports KVM
Unlike XEN, which can use paravirtualization, KVM requires full hardware virtualization support from the CPU.
There are mandatory hardware requirements for a KVM (or VMware ESXi) server to install successfully.
- Hardware virtualization support in the host processor is required. To check whether your processor has virtualization enabled, search for vmx (Intel) or svm (AMD) in /proc/cpuinfo (see also the check commands after this output):
~$ grep -o vmx /proc/cpuinfo
vmx
vmx
vmx
vmx
An Intel Core 2 Duo shows 4 vmx entries.
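Two other quick checks are commonly used; they count the flags (vmx for Intel, svm for AMD) and confirm that the KVM modules are loaded:
$ egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means hardware virtualization is available
$ lsmod | grep kvm                     # kvm_intel or kvm_amd appears once KVM is installed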
Install KVM virtualization as a normal user
Install the "kvm-ok" utility to determine whether your server is capable of running hardware-accelerated KVM virtual machines:
$ sudo apt install cpu-checker
$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
Install KVM and its required packages:
$ sudo apt update
$ sudo apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager
Once the above packages are installed successfully, your local user will be added to the group libvirtd automatically.
When the qemu and libvirt packages are installed on Ubuntu 18.04 Server, the libvirtd service is automatically started and enabled:
~$ service libvirtd status
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-12-24 19:35:20 CST; 1min 41s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 5503 (libvirtd)
    Tasks: 19 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           ├─5503 /usr/sbin/libvirtd
           ├─5976 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/us
           └─5977 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/us
十二 24 19:35:20 jerry-Latitude-E6410 systemd[1]: Started Virtualization daemon.
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq[5976]: started, version 2.79 cachesize 150
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq[5976]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 n
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq-dhcp[5976]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq-dhcp[5976]: DHCP, sockets bound exclusively to interface virbr0
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq[5976]: reading /etc/resolv.conf
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq[5976]: using nameserver 127.0.0.53#53
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq[5976]: read /etc/hosts - 7 addresses
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq[5976]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
十二 24 19:35:28 jerry-Latitude-E6410 dnsmasq-dhcp[5976]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Configure Host's Network Bridge for KVM virtual Machines
A network bridge is required to access KVM-based virtual machines from the host or from outside the host. On an Ubuntu 18.04 host, the network is managed by the netplan utility, and the netplan file is created under /etc/netplan/:
01-network-manager-all.yaml
You can modify the configuration to add a bridge and then add the network interface to that bridge, as below:
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [ens33]
      dhcp4: no
      addresses: [192.168.0.51/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1]
If you don't want to change your current networking configuration, you can use NAT forwarding.
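After editing the netplan file, apply the configuration and verify the bridge (brctl comes from the bridge-utils package installed earlier):
$ sudo netplan apply
$ brctl show br0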
Guest Machine Installation
A VM Guest is comprised of:
- an image file containing an operating system and data files
- a configuration file describing the VM Guest's virtual hardware resources.
Manage virtual machines with virt-manager
The Virtual Machine Manager is a desktop tool for managing VM Guests through libvirt. It provides the ability to control the life cycle of existing machines (bootup/shutdown, pause/resume, suspend/restore).
It lets you create new VM Guests and various types of storage, and manage virtual networks.
Access the graphical console of VM Guests with the built-in VNC viewer, and view performance statistics, all done locally or remotely.
To start the Virtual Machine Manager,
$ virt-manager
File -> New Virtual Machine ->
Virt-manager’s supporting tools:
- virt-install is a command line tool which provides an easy way to provision operating systems into virtual machines (see the example after this list).
- virt-viewer is a lightweight UI for interacting with the graphical display of a virtualized guest OS. It can display VNC or SPICE, and uses libvirt to look up the graphical connection details.
- virt-clone is a command line tool for cloning existing inactive guests. It copies the disk images, and defines a config with new name, UUID and MAC address pointing to the copied disks.
- virt-xml is a command line tool for easily editing libvirt domain XML using virt-install’s command line options.
- virt-bootstrap is a command line tool providing an easy way to setup the root file system for libvirt-based containers.
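A minimal virt-install sketch for creating a guest from an ISO; the name, sizes, and paths here are placeholders, not values from this article:
$ virt-install --name vm01 --memory 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/vm01.qcow2,size=10,format=qcow2 \
    --cdrom /path/to/install.iso \
    --network network=default --graphics vnc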
Manage KVM Virtual Machines Using CLI
Virsh, short for Virtual Shell, is a command line user interface for managing virtual guest machines.
To view the list of available commands along with brief description:
$ virsh help
- list all configured guest VMs
$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     vm01                           shut off
- start / suspend / resume / shut down a VM
$ virsh start vm01
Domain vm01 started
$ virsh suspend vm01
$ virsh resume vm01
$ virsh shutdown vm01
- clone an existing, inactive VM
$ virt-clone --original=vm01 --name=vm01-Clone --file=/var/lib/libvirt/images/vm01-clone.img
- remove a VM's libvirt definition
$ virsh undefine vm01
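Two other virsh commands that are useful here (console access assumes a serial console is configured inside the guest):
$ virsh dominfo vm01    # basic information about the guest (state, vCPUs, memory)
$ virsh console vm01    # attach to the guest's serial console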
Configuring Guest Networking
Virtual machines must be able to connect to physical and virtual networks with their virtual network adapters. Guest (Virtual Machine) networking in KVM is the same as in QEMU, so it is possible to refer to other documentation about networking in QEMU.
There are two common setups for virtual networking: "virtual network" or "shared physical device".
NAT forwarding (aka "virtual networks")
- Host configuration Every standard libvirt installation provides NAT based connectivity to virtual machines. This is the so called 'default virtual network'.
/etc/libvirt/qemu/
└── networks
    ├── autostart
    │   └── default.xml -> /etc/libvirt/qemu/networks/default.xml
    └── default.xml
You can verify if it is available:
$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
Dump the configuration of the default network:
$ virsh net-dumpxml default
<network connections='1'>
  <name>default</name>
  <uuid>2412578a-15cf-49b0-be45-9828a07b5eb8</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:42:52:65'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
If it is missing, then the example XML config can be reloaded and activated:
# virsh net-define /usr/share/libvirt/networks/default.xml
Network default defined from /usr/share/libvirt/networks/default.xml
# virsh net-autostart default
Network default marked as autostarted
# virsh net-start default
Network default started
Once the libvirt default network is running, you will see an isolated bridge device. This device does not have any physical interfaces added.
# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.5254009b3dc0       yes             virbr0-nic
Libvirt will add iptables rules to allow traffic to/from guests attached to the virbr0 device in the INPUT, FORWARD, OUTPUT and POSTROUTING chains.
# iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
RETURN     all  --  192.168.122.0/24     224.0.0.0/24
RETURN     all  --  192.168.122.0/24     255.255.255.255
MASQUERADE tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
MASQUERADE udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
MASQUERADE all  --  192.168.122.0/24    !192.168.122.0/24
libvirt then attempts to enable ip_forward:
# cat /proc/sys/net/ipv4/ip_forward
1
Some other applications may disable it, so the best option is to add the following to /etc/sysctl.conf:
# Uncomment the next line to enable packet forwarding for IPv4 net.ipv4.ip_forward=1
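After editing /etc/sysctl.conf, the setting can be reloaded without a reboot:
$ sudo sysctl -p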
On the guest side, an interface is connected to this NAT network by referencing it in the domain XML with:
<source network='default'/>
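Alternatively, an interface on the default network can be attached to a guest with virsh; a minimal sketch, assuming a guest named vm01:
$ virsh attach-interface vm01 network default --model virtio --config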
Bridged networking (aka "shared physical device")
KVM forward ports to guests VM with UFW on Linux
Ubuntu – ARM KVM
REMOTE MANAGEMENT WITH SSH
The method described below uses the libvirt management connection, securely tunneled over an SSH connection, to manage the remote VM.
- Install SSH on both the remote VM and the local host
~$ sudo apt-get install -y openssh-server
- Generating the SSH key pair on the host
ssh-keygen -t rsa
$ scp jerry@192.168.0.105:/home/jerry/.ssh/id_rsa.pub .
$ cat id_rsa.pub >> .ssh/authorized_keys
$ ssh jerry@192.168.122.145
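Once key-based SSH login works, the libvirt tools can manage the remote hypervisor over the same SSH tunnel; a minimal sketch using the account and address from the scp step above:
$ virsh -c qemu+ssh://jerry@192.168.0.105/system list --all
$ virt-manager -c qemu+ssh://jerry@192.168.0.105/system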
Xen
The Xen Project hypervisor is an open-source type-1 or bare-metal hypervisor, which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host).
Managing Guest Virtual Machines with virsh
Mappings for DMI/SMBIOS to Linux and dmidecode
Information can be put into DMI tables by some qemu-system binaries (x86_64 and aarch64). That information is exposed in Linux under /sys/class/dmi/id and can be read with dmidecode.
qemu-system-x86_64 -smbios type=<type>,field=value[,...]
Example:
$ qemu-system-x86_64 -smp 2 -m 1500 -netdev user,id=mynet0,hostfwd=tcp::8022-:22,hostfwd=tcp::8090-:80 -device virtio-net-pci,netdev=mynet0 -vga qxl -drive file=iot-elgin-core-X00-20210110-24.img,format=raw -bios /usr/share/ovmf/OVMF.fd -smbios type=1,serial=ABCDEF789012,manufacturer=Jerry
The names are very annoyingly inconsistent. The point of this doc is to map them.
type | -smbios field | Linux path | dmidecode --string=F |
---|---|---|---|
0 | vendor | bios_vendor | bios-vendor |
0 | date | bios_date | bios-release-date |
0 | version | bios_version | bios-version |
0 | release=(%d.%d) | n/a | n/a |
0 | uefi=(on|off) | n/a | n/a |
1 | manufacturer | sys_vendor | system-manufacturer |
1 | product | product_name | system-product-name |
1 | version | product_version | system-version |
1 | serial | product_serial | system-serial-number |
1 | uuid | product_uuid | system-uuid |
1 | sku | n/a | n/a |
1 | family | product_family | n/a |
2 | manufacturer | board_vendor | baseboard-manufacturer |
2 | product | board_name | baseboard-product-name |
2 | version | board_version | baseboard-version |
2 | serial | board_serial | baseboard-serial-number |
2 | asset | asset_tag | baseboard-asset-tag |
2 | location | n/a | n/a |
3 | manufacturer | chassis_vendor | chassis-manufacturer |
3 | version | chassis_version | chassis-version |
3 | serial | chassis_serial | chassis-serial-number |
3 | asset | chassis_asset_tag | chassis-asset-tag |
3 | sku | n/a | n/a |
4 | sock_pfx | n/a | n/a |
4 | manufacturer | n/a | processor-manufacturer |
4 | version | n/a | processor-version |
4 | serial | n/a | n/a |
4 | asset | n/a | n/a |
4 | part | n/a | n/a |
11 | value | n/a | --oem-string=N |
17 | loc_pfx | n/a | n/a |
17 | bank | n/a | n/a |
17 | manufacturer | n/a | n/a |
17 | serial | n/a | n/a |
17 | asset | n/a | n/a |
17 | part=(%d) | n/a | n/a |
17 | speed=(%d) | n/a | n/a |
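For example, with the -smbios type=1,serial=ABCDEF789012 option from the qemu-system-x86_64 command above, the value can be read back inside the guest through either interface listed in the table:
$ sudo cat /sys/class/dmi/id/product_serial
ABCDEF789012
$ sudo dmidecode --string=system-serial-number
ABCDEF789012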
SMBIOS System Information
Vagrant
Vagrant is an open-source software product for building and maintaining portable virtual software development environments.
Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments:
- Provisioners are tools that allow users to customize the configuration of virtual environments. Puppet and Chef are the two most widely used provisioners in the Vagrant ecosystem (Ansible has been available since at least 2014[10]).
- Providers are the services that Vagrant uses to set up and create virtual environments. Support for VirtualBox, Hyper-V, and Docker virtualization ships with Vagrant, while VMware and AWS are supported via plugins.
You can download the VirtualBox build for your platform from the VirtualBox website, and the Vagrant build for your platform from the Vagrant website.
Once VirtualBox and Vagrant are installed, decide which operating system you want to run in your VM. A packaged operating-system environment is called a Box in Vagrant; in other words, every Box is a packaged OS environment.
Vagrant uses "Provisioners" and "Providers" as the building blocks of development environments.
|--vagrant
   |--Providers     e.g., VirtualBox, Hyper-V, Docker, VMware, AWS
   |--Boxes         e.g., CentOS 7, Ubuntu (OS images)
   |--Provisioners  e.g., custom automation scripts such as 'yum install -y python'
The Vagrantfile records the Provider and Provisioner settings; its purpose is to make it easier for developers to interact with Providers. Vagrant's built-in Providers include VirtualBox, Hyper-V, Docker, and VMware.
When using a Provider such as VirtualBox, a Box is needed to create the virtual environment; when using Docker as the Provider, no Box is required.
Once the virtual environment has been created, developers can use Provisioners to customize it automatically.
Using existing virtual machines with Vagrant
In this case, the virtual machine was created from an ISO installation of Raspberry Pi Desktop.
There is a user account/passwd present on the machine that we want to reuse:
pi/xxx
If this VM boots locally and we can SSH into it normally, then we can access it with the vagrant ssh method once we have converted the virtual machine into a Vagrant box.
We can use a few simple commands to export a new Vagrant box.
The installation of guest additions for VirtualBox
Newly packaged boxes will need to have the guest additions installed prior to packaging.
The Guest Additions are designed to be installed inside a virtual machine after the guest operating system has been installed.
The Oracle VM VirtualBox Guest Additions are provided as a single CD-ROM image file which is called VBoxGuestAdditions.iso.
the Oracle VM VirtualBox Guest Additions for Linux are a set of device drivers and system applications which may be installed in the guest operating system.
On virtualbox menu go to File > Virtual Media Manager > Optical Disks. You should see VBoxGuestAdditions.iso file.
On my environment, this file is under /usr/share/virtualbox. Configure the drive to load this image:
Boot the VM then execute the following to install the VBoxLinuxAdditions:
cd /media/cdrom0
sudo sh VBoxLinuxAdditions.run
If VBoxGuestAdditions.iso doesn't exist on the host, download it first. Then, copy it from the host to the guest machine. On the guest OS, mount VBoxGuestAdditions.iso on /mnt:
sudo mount -o loop VBoxGuestAdditions.iso /mnt/
cd /mnt/
sudo sh VBoxLinuxAdditions.run
Packaging the existing VirtualBox VM
Before we can use the existing VM with Vagrant, we need to package the VM into an appropriate box format.
- Find the name of the VM. The name that VirtualBox assigns to the machine is displayed in the left-hand menu of the VM VirtualBox Manager console:
- Create a temporary workspace to package the box.
- Execute the packaging command. Warning! This is for VirtualBox only.
- Import the box file into your environment. Add the box with the command:
RasPiDesktop
mkdir ~/box-workspace
cd ~/box-workspace
vagrant package --base=RasPiDesktop --output=rpd_x86_latest.box
This command might take some time to execute; Vagrant is copying the existing VirtualBox machine into a box file along with some metadata that allows Vagrant to recognize the box file itself. When this command is finished, you'll end up with a box file called rpd_x86_latest.box in the working directory. The Vagrant box file is a file in a Unix Tape ARchive (TAR) format. If we untar the box file with the command:
tar tvf rpd_x86_latest.box
we can look at the contents of the box to see how it works. The following are the contents of the untarred file:
-rw-r--r-- jerry/jerry 505 2020-03-05 10:52 ./Vagrantfile
-rw-r--r-- jerry/jerry 25 2020-03-05 10:52 ./metadata.json
-rwx------ jerry/jerry 7553 2020-03-05 10:43 ./box.ovf
-rw-r--r-- jerry/jerry 6266143232 2020-03-05 10:52 ./box-disk001.vmdk
vagrant box add rpd_x86_latest.box --name=RasPiDesktop_x86
This command will copy the box to your local Vagrant cache, so you are now ready to directly use the box with Vagrant!
Configuring a Vagrant environment
After the box is added to our local cache, we can configure a new Vagrant environment to use the box.
- Initialize a new Vagrant environment with our new box. Do this by executing the command:
- Modify the file Vagrantfile to use the correct user/passwd to SSH into the machine. Edit the Vagrantfile created in the previous step to include two new lines that set the config.ssh parameters:
- Network configuration Modify the file Vagrantfile to use DHCP on the bridge adaptor:
- Boot the environment
- log in the VM
- Shared folder The setting is:
- Show the status of running VM
- Suspend running VM
- Shutdown the VM
- List the available boxes
- Remove a box
vagrant init RasPiDesktop_x86
This will create a basic Vagrantfile that uses our new RasPiDesktop_x86 box.
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "RasPiDesktop_x86"
config.ssh.username="pi"
config.ssh.password="xxx"
After the first login, Vagrant will place a public key in the appropriate account; so, if desired, the password can be removed from the Vagrantfile after the first boot.
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network"
vagrant up --provider=virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'RasPiDesktop_x86'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: box-workspace_default_1583378481916_37742
Vagrant is currently configured to create VirtualBox synced folders with
the `SharedFoldersEnableSymlinksCreate` option enabled. If the Vagrant
guest is not trusted, you may want to disable this option. For more
information on this option, please refer to the VirtualBox manual:
https://www.virtualbox.org/manual/ch04.html#sharedfolders
This option can be disabled globally with an environment variable:
VAGRANT_DISABLE_VBOXSYMLINKCREATE=1
or on a per folder basis within the Vagrantfile:
config.vm.synced_folder '/host/path', '/guest/path', SharedFoldersEnableSymlinksCreate: false
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: pi
default: SSH auth method: password
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
default: /vagrant => /home/jerry/box-workspace
$ vagrant ssh
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
By default, the folder containing the Vagrantfile on the host is synced to /vagrant on the guest (as shown in the boot output above); the commented example would additionally sync "../data" on the host to "/vagrant_data" on the guest.
$ vagrant status
$ vagrant suspend
$ vagrant halt
$ vagrant box list
$ vagrant box remove a_box
File Provisioner
The Vagrant file provisioner allows you to upload a file or directory from the host machine to the guest machine. File provisioning is a simple way to, for example, replicate your local ~/.gitconfig to the vagrant user's home directory on the guest machine:
Vagrant.configure("2") do |config|
  # ... other configuration
  config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
end
so you will not have to run
git config --global
every time you provision a new VM.
Note that, unlike with synced folders, files or directories that are uploaded will not be kept in sync.
Project ACRN
ACRN has a privileged management VM, called the Service VM, to manage User VMs and do I/O emulation.
ACRN userspace is an application running in the Service VM that emulates devices for a User VM.
ACRN Hypervisor Service Module (HSM) is a kernel module in the Service VM which provides hypervisor services to the ACRN userspace.
[Architecture diagram] The Service VM runs ACRN userspace in user space on top of the HSM and device drivers in kernel space; ACRN userspace reaches the HSM via ioctl, the HSM reaches the ACRN Hypervisor via hypercalls, and the hypervisor runs directly on the hardware alongside the User VMs.
ACRN userspace:
- allocates memory for the User VM
- configures and initializes the devices used by the User VM
- loads the virtual bootloader
- initializes the virtual CPU state
- handles I/O request accesses from the User VM
HSM implements hypervisor services by interacting with the ACRN Hypervisor via hypercalls.
HSM exports a char device interface (/dev/acrn_hsm) to userspace.
ACRN is built to virtualize embedded IoT and Edge development functions (for a camera, audio, graphics, storage, networking, and more), so it’s ideal for a broad range of IoT and Edge uses, including industrial, automotive, and retail applications.
Developer Reference
High-Level Design Guides
Tracing and Logging High-Level Design
Both Trace and Log are built on top of a mechanism named shared buffer (sbuf). Shared Buffer is a ring buffer divided into predetermined-size slots.
The sbuf is allocated by Service VM and assigned to HV via a hypercall. To hold pointers to sbuf passed down via hypercall, an array sbuf[ACRN_SBUF_ID_MAX] is defined in per_cpu region of HV, with predefined sbuf ID to identify the usage, such as ACRNTrace, ACRNLog, etc.
For each physical CPU, there is a dedicated sbuf.
Only a single producer is allowed to put data into that sbuf in HV, and a single consumer is allowed to get data from sbuf in Service VM.
Therefore, no lock is required to synchronize access by the producer and consumer.
- ACRN Trace
- ACRN Log