Planet Ubuntu

Simos Xenitellis: How to run Wine (graphics-accelerated) in an LXD container on Ubuntu

Wine lets you run Windows programs on your GNU/Linux distribution.

When you install Wine, it pulls in all sorts of packages, including 32-bit packages. It looks quite messy. Could there be a way to place all those Wine files in a container and keep them there?

This is what we are going to see today. Specifically,

  1. We are going to create an LXD container, called wine-games
  2. We are going to set it up so that it runs graphics-accelerated programs. glxinfo will show the host GPU details.
  3. We are going to install the latest Wine package.
  4. We are going to install and play one of those Windows games.

Creating the LXD container

Let’s create the new container.

$ lxc launch ubuntu:x wine-games
Creating wine-games
Starting wine-games
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called wine-games.

Let’s also install our initial testing apps. The first is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. If both xclock and glxinfo work in the container, we can safely continue with Wine!

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo apt update
ubuntu@wine-games:~$ sudo apt install x11-apps
ubuntu@wine-games:~$ sudo apt install mesa-utils
ubuntu@wine-games:~$ exit
$

We execute a login shell in the wine-games container as user ubuntu, the default non-root username in Ubuntu LXD images.

Then, we run apt update in order to update the package list and be able to install the subsequent two packages that provide xclock and glxinfo respectively. Finally, we exit the container.

Setting up for graphics acceleration

For graphics acceleration, we are going to use the host’s graphics card and its acceleration support. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: let the container access the GPU devices of the host, and make sure that there are no restrictions because of different user IDs.

First, we run (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command adds a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (which runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our wine-games LXD container, and restart the container for the change to take effect.

$ lxc config set wine-games raw.idmap "both $UID 1000"
$ lxc restart wine-games
$

This “both $UID 1000” syntax is a shortcut that maps the $UID/$GID of our user on the host to the default non-root user in the container (whose UID/GID should be 1000).
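
To verify that the remap took effect, we can print the UID map of a process inside the container. This is just a sanity check; the exact ranges depend on your /etc/subuid defaults, but the middle line (host UID 1000 mapped one-to-one to container UID 1000) is the remap we asked for.

$ lxc exec wine-games -- cat /proc/self/uid_map
         0     100000       1000
      1000       1000          1
      1001     101001      64535
$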

Let’s attempt to run xclock in the container.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ xclock
Error: Can't open display: 
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock
Error: Can't open display: :0
ubuntu@wine-games:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set DISPLAY to the default :0 (either a Unix socket or port 6000), which does not work either because we have not set it up yet. Let’s do that.

$ lxc config device add wine-games X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0
$ lxc config device add wine-games Xauthority disk path=/home/ubuntu/.Xauthority source=/home/myusername/.Xauthority
$

We give access to the Unix socket of the X server (/tmp/.X11-unix/X0) to the container, and make it available at exactly the same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is used for access control, and simply makes our X server allow the access.

Let’s see what we got now.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock

ubuntu@wine-games:~$ glxinfo 
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
ubuntu@wine-games:~$ echo "export DISPLAY=:0">> ~/.profile 
ubuntu@wine-games:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Installing Wine

We install Wine in the container according to the instructions at https://wiki.winehq.org/Ubuntu.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo dpkg --add-architecture i386 
ubuntu@wine-games:~$ wget https://dl.winehq.org/wine-builds/Release.key
--2017-05-01 21:30:14--  https://dl.winehq.org/wine-builds/Release.key
Resolving dl.winehq.org (dl.winehq.org)... 151.101.112.69
Connecting to dl.winehq.org (dl.winehq.org)|151.101.112.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [application/pgp-keys]
Saving to: ‘Release.key’

Release.key                100%[=====================================>]   3.05K  --.-KB/s    in 0s      

2017-05-01 21:30:15 (24.9 MB/s) - ‘Release.key’ saved [3122/3122]

ubuntu@wine-games:~$ sudo apt-key add Release.key
OK
ubuntu@wine-games:~$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/
ubuntu@wine-games:~$ sudo apt-get update
...
Reading package lists... Done
ubuntu@wine-games:~$ sudo apt-get install --install-recommends winehq-devel
...
Need to get 115 MB of archives.
After this operation, 715 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
ubuntu@wine-games:~$

715MB?!? Sure, bring it on. All that stuff will stay in the container! 🙂
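
Once the installation completes, a quick check that Wine is in place (the version you get will differ depending on when you install):

ubuntu@wine-games:~$ wine --version
wine-2.7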

Let’s run a game in the container

Here is a game that looks good for our test, Season Match 4. Let’s play it.

ubuntu@wine-games:~$ wget http://cdn.gametop.com/free-games-download/Season-Match4.exe
ubuntu@wine-games:~$ wine Season-Match4.exe 
...
ubuntu@wine-games:~$ cd .wine/drive_c/Program\ Files\ \(x86\)/GameTop.com/Season\ Match\ 4/
ubuntu@wine-games:~/.wine/drive_c/Program Files (x86)/GameTop.com/Season Match 4$ wine SeasonMatch4.exe

Here is the game, and it works. We did not set up sound in this post, nor did we make nice shortcuts so that we can run these apps with a single click. That’s material for a future tutorial!


The Fridge: Ubuntu Weekly Newsletter Issue 506

Welcome to the Ubuntu Weekly Newsletter. This is issue #506 for the week of April 24 – 30, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Canonical Design Team: April’s reading list

James Page: OpenStack Charms in Boston

At next week’s OpenStack Summit in Boston, the OpenStack Charms team will be holding an onboarding workshop on Monday at 4:40pm in MR-105.

This is a great opportunity to learn more about the project, both in terms of how to get started using the OpenStack Charms to deploy OpenStack, and how to get involved with the project from a contribution perspective!

Let us know if you’re coming along and what you’d like to get out of the session here.

Looking forward to seeing you all next week!


Kees Cook: security things in Linux v4.11

Previously: v4.10.

Here’s a quick summary of some of the interesting security things in this week’s v4.11 release of the Linux kernel:

refcount_t infrastructure

Building on the efforts of Elena Reshetova, Hans Liljestrand, and David Windsor to port PaX’s PAX_REFCOUNT protection, Peter Zijlstra implemented a new kernel API for reference counting with the addition of the refcount_t type. Until now, all reference counters were implemented in the kernel using the atomic_t type, but it has a wide and general-purpose API that offers no reasonable way to provide protection against reference counter overflow vulnerabilities. With a dedicated type, a specialized API can be designed so that reference counting can be sanity-checked and provide a way to block overflows. With 2016 alone seeing at least a couple of publicly exploitable reference counting vulnerabilities (e.g. CVE-2016-0728, CVE-2016-4558), this is going to be a welcome addition to the kernel. The arduous task of converting all the atomic_t reference counters to refcount_t will continue for a while to come.

CONFIG_DEBUG_RODATA renamed to CONFIG_STRICT_KERNEL_RWX

Laura Abbott landed changes to rename the kernel memory protection feature. The protection hadn’t been “debug” for over a decade, and it covers all kernel memory sections, not just “rodata”. Getting it consolidated under the top-level arch Kconfig file also brings some sanity to what was a per-architecture config, and signals that this is a fundamental kernel protection that needs to be enabled on all architectures.
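
A quick way to check whether a running kernel was built with this protection is to grep its build config; a minimal check, assuming your distribution ships the kernel config under /boot:

$ grep CONFIG_STRICT_KERNEL_RWX /boot/config-$(uname -r)
CONFIG_STRICT_KERNEL_RWX=y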

read-only usermodehelper

A common way for attackers to escape confinement is to rewrite the user-mode helper sysctls (e.g. /proc/sys/kernel/modprobe) to run something of their choosing in the init namespace. To reduce attack surface within the kernel, Greg KH introduced CONFIG_STATIC_USERMODEHELPER, which switches all user-mode helper binaries to a single read-only path (which defaults to /sbin/usermode-helper). Userspace will need to support this with a new helper tool that can demultiplex the kernel request to a set of known binaries.

seccomp coredumps

Mike Frysinger noticed that it wasn’t possible to get coredumps out of processes killed by seccomp, which could make debugging frustrating, especially for automated crash dump analysis tools. In keeping with the existing documentation for SIGSYS, which says a coredump should be generated, he added support to dump core on seccomp SECCOMP_RET_KILL results.

structleak plugin

Ported from PaX, I landed the structleak plugin which enforces that any structure containing a __user annotation is fully initialized to 0 so that stack content exposures of these kinds of structures are entirely eliminated from the kernel. This was originally designed to stop a specific vulnerability, and will now continue to block similar exposures.

ASLR entropy sysctl on MIPS

Matt Redfearn implemented the ASLR entropy sysctl for MIPS, letting userspace choose to crank up the entropy used for memory layouts.
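
The sysctl itself is the same knob already exposed on other architectures; an illustrative use (the default value and the valid range are architecture-dependent):

$ sysctl vm.mmap_rnd_bits
vm.mmap_rnd_bits = 28
$ sudo sysctl -w vm.mmap_rnd_bits=32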

NX brk on powerpc

Denys Vlasenko fixed a long-standing bug where the kernel made assumptions about ELF memory layouts and defaulted the brk section on powerpc to be executable. Now it’s not, and that’ll keep process heap from being abused.

That’s it for now; please let me know if I missed anything. The v4.12 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Raphaël Hertzog: My Free Software Activities in April 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 10 hours to work on security updates for Debian 7 Wheezy and had 1.5 hours remaining from March. During this time I did the following:

  • I released DLA-905-1 on ghostscript fixing 3 CVEs. I also triaged two other ghostscript CVEs that were not relevant to the version in wheezy.
  • I started to look into CVE-2016-10209 for libarchive but was not able to reproduce the segfault and marked it as not worth an update (same decision as security team).
  • After many tries to get more details from upstream of libxml-twig-perl on CVE-2016-9180, I decided that the low severity of the issue was not worth spending more time on it (same decision as RedHat and Debian security team).
  • I released DLA-921-1 on slurm-llnl fixing 1 high-severity CVE.
  • I investigated CVE-2016-8686 on potrace and marked it as not requiring an update because the impact is very low. I documented the fact that it’s fixed in unstable and asked the upstream author for the specific patch (no answer yet though).

Kali and pkg-security

I updated the britney instance that we are using in Kali and spotted two small documentation mistakes that I fixed.

We had a long-standing bug in Kali where extensions would stay visible on the lock screen. It was hard to reproduce and this month we finally managed to nail down the conditions required to reproduce it. It turns out that EasyScreenCast was the culprit. We paid Emilio Pozuelo Monfort to work on a patch and he fixed the problem in EasyScreenCast and also in gnome-shell, as a buggy extension should not have resulted in this behavior.

I responded to multiple queries from new contributors in the pkg-security team. The team is rather active and it would be great if we could have a few more Debian developers to help review and sponsor the work of our enthusiastic new members.

Thanks

See you next month for a new summary of my activities. Hopefully, I will be more active again… between kids’ vacations, French elections and Zelda: Breath of the Wild, I got very much distracted from Debian last month. 🙂

Kubuntu General News: 17.10 Wallpaper Contest! Call for artists

For the Artful cycle, we’re trying something new for Kubuntu: a wallpaper contest!

Any user can enter their own piece of artwork; you do not have to be a K/ubuntu member. Kubuntu members will be voting on the wallpaper entries. The top ten wallpapers will be on the Artful ISO. The Kubuntu Council will deal with any ties.

Upload your original work:
https://www.flickr.com/groups/kubuntu-cws-1710/

Follow the Ubuntu Free Culture Showcase examples:
https://wiki.ubuntu.com/UbuntuFreeCultureShowcase
https://www.flickr.com/groups/ubuntu-fcs-1704/

Submissions should have a human-language title and the description should give the author’s name if not entered as a display name on Flickr.

License your entry using the Creative Commons Attribution-ShareAlike or Creative Commons Attribution license. As an exception, we will consider images licensed as “Public Domain” that are submitted to this contest as being under the Creative Commons Zero waiver.

Only submit your own work; no more than two entries per person.

All entries must follow the Ubuntu Code of Conduct.

Submission Deadline: June 8, 2017
Winners announced: June 22, 2017

Simos Xenitellis: How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

In How to run Wine (graphics-accelerated) in an LXD container on Ubuntu we had a quick look into how to run GUI programs in an LXD (Lex-Dee) container, and have the output appear on the local X11 server (your Ubuntu desktop).

In this post, we are going to see how to

  1. generalize the instructions in order to run most GUI apps in an LXD container and have them appear on your desktop
  2. have accelerated graphics support and audio
  3. test with Firefox, Chromium and Chrome
  4. create shortcuts to easily launch those apps

The benefits of running GUI apps in an LXD container are

  • clear separation of the installation data and settings, from what we have on our desktop
  • ability to create a snapshot of this container, save, rollback, delete, recreate; all these in a few seconds or less
  • does not mess up your installed package list (for example, all those i386 packages for Wine, Google Earth)
  • ability to create an image of such a perfect container, publish, and have others launch in a few clicks

What we are doing today is similar to having a Virtualbox/VMWare VM and running a Linux distribution in it. Let’s compare,

  • It is similar to the Virtualbox Seamless Mode or the VMWare Unity mode
  • A VM virtualizes a whole machine and has to do a lot of work in order to provide somewhat good graphics acceleration
  • With a container, we directly reuse the graphics card and get graphics acceleration
  • The specific setup we show today can potentially allow a container app to interact with the desktop apps (TODO: show desktop isolation in a future post)

Browsers have started getting containers, specifically in-browser containers. This shows a trend towards containers in general, although that variant is browser-specific and dictated by usability (passwords, form and search data are shared between the containers).

In the following, our desktop computer will be called the host, and the LXD container the container.

Setting up LXD

LXD is supported in Ubuntu and derivatives, as well as other distributions. When you initially set up LXD, you select where to store the containers. See LXD 2.0: Installing and configuring LXD [2/12] for your options. Ideally, if you choose to pre-allocate disk space or use a partition, select at least 15GB, but preferably more.

If you plan to play games, increase the space by the size of that game. For best results, select ZFS as the storage backend, and place the storage on an SSD. The post Trying out LXD containers on our Ubuntu may also help.
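
If you have not initialized LXD yet, the interactive setup is a single command; a sketch of the first prompt (the exact questions vary between LXD versions):

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
...
$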

Creating the LXD container

Let’s create the new container for LXD. We are going to call it guiapps, and install Ubuntu 16.04 in it. There are options for other Ubuntu versions, and even other distributions.

$ lxc launch ubuntu:x guiapps
Creating guiapps
Starting guiapps
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| guiapps       | RUNNING | 10.0.185.204(eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called guiapps.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. The third is glxgears, a minimal graphics-accelerated application. The fourth is speaker-test, to test for audio. We will know that our setup works if all four of xclock, glxinfo, glxgears and speaker-test work in the container!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt update
ubuntu@guiapps:~$ sudo apt install x11-apps
ubuntu@guiapps:~$ sudo apt install mesa-utils
ubuntu@guiapps:~$ sudo apt install alsa-utils
ubuntu@guiapps:~$ exit
$

We execute a login shell in the guiapps container as user ubuntu, the default non-root user account in all Ubuntu LXD images. Other distribution images probably have another default non-root user account.

Then, we run apt update in order to update the package list and be able to install the subsequent three packages, which provide xclock, glxinfo and glxgears, and speaker-test (or aplay) respectively. Finally, we exit the container.

Mapping the user ID of the host to the container (PREREQUISITE)

In the following steps we will be sharing files from the host (our desktop) to the container. There is the issue of what user ID will appear in the container for those shared files.

First, we run on the host (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command appends a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (which runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our guiapps LXD container, and restart the container for the change to take effect.

$ lxc config set guiapps raw.idmap "both $UID 1000"
$ lxc restart guiapps
$

This “both $UID 1000” syntax is a shortcut that maps the $UID/$GID of our user on the host to the default non-root user in the container (whose UID/GID should be 1000, at least for Ubuntu images).

Configuring graphics and graphics acceleration

For graphics acceleration, we are going to use the host’s graphics card and its acceleration support. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: let the container access the GPU devices of the host, and make sure that there are no restrictions because of different user IDs.

Let’s attempt to run xclock in the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ xclock
Error: Can't open display: 
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock
Error: Can't open display: :0
ubuntu@guiapps:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not fully set it up yet. Let’s do that.

$ lxc config device add guiapps X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add guiapps Xauthority disk path=/home/ubuntu/.Xauthority source=/home/${USER}/.Xauthority

We give access to the Unix socket of the X server (/tmp/.X11-unix/X0) to the container, and make it available at exactly the same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is used for access control, and simply makes our host’s X server allow access from applications inside the container.

How do we get the GPU’s hardware acceleration to the container apps? There is a special LXD device for that, and it’s called gpu. The hardware acceleration for the graphics card is enabled by running the following,

$ lxc config device add guiapps mygpu gpu

We add the gpu device, and we happen to name it mygpu (any name would suffice). The gpu device was introduced in LXD 2.7, therefore if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Please leave a comment below if this was your case (mention which LXD version you have been running). Note that for Intel GPUs (my case), you may not need to add this device.
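
At this point we can list everything we have attached so far; an illustrative check (myusername stands for your host username, and the names reflect what we chose above):

$ lxc config device show guiapps
X0:
  path: /tmp/.X11-unix/X0
  source: /tmp/.X11-unix/X0
  type: disk
Xauthority:
  path: /home/ubuntu/.Xauthority
  source: /home/myusername/.Xauthority
  type: disk
mygpu:
  type: gpu
$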

Let’s see what we got now.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock

ubuntu@guiapps:~$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Intel Open Source Technology Center (0x8086)
...
ubuntu@guiapps:~$ glxgears
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
345 frames in 5.0 seconds = 68.783 FPS
309 frames in 5.0 seconds = 61.699 FPS
300 frames in 5.0 seconds = 60.000 FPS
^C
ubuntu@guiapps:~$ echo "export DISPLAY=:0">> ~/.profile 
ubuntu@guiapps:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Configuring audio

The audio server on the Ubuntu desktop is Pulseaudio, and Pulseaudio has a feature to allow authenticated access over the network, just like the X11 server we dealt with earlier. Let’s set this up.

We install the paprefs (PulseAudio Preferences) package on the host.

$ sudo apt install paprefs
...
$ paprefs

This is the only option we need to enable (by default all other options are not checked and can remain unchecked).

That is, under the Network Server tab, we tick Enable network access to local sound devices.

Then, just like with the X11 configuration, we need to deal with two things; the access to the Pulseaudio server of the host (either through a Unix socket or an IP address), and some way to get authorization to access the Pulseaudio server. Regarding the Unix socket of the Pulseaudio server, it is a bit hit and miss (we could not figure out how to use it reliably), so we are going to use the IP address of the host (lxdbr0 interface).

First, the IP address of the host (that has Pulseaudio) is the IP of the lxdbr0 interface, or the default gateway (ip route show). Second, the authorization is provided through the cookie in the host at /home/${USER}/.config/pulse/cookie. Let’s wire these two into the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ echo export PULSE_SERVER="tcp:`ip route show 0/0 | awk '{print $3}'`">> ~/.profile

This command will automatically set the variable PULSE_SERVER to a value like tcp:10.0.185.1, which is the IP address of the host, for the lxdbr0 interface. The next time we log in to the container, PULSE_SERVER will be configured properly.

ubuntu@guiapps:~$ mkdir -p ~/.config/pulse/
ubuntu@guiapps:~$ echo export PULSE_COOKIE=/home/ubuntu/.config/pulse/cookie >> ~/.profile
ubuntu@guiapps:~$ exit
$ lxc config device add guiapps PACookie disk path=/home/ubuntu/.config/pulse/cookie source=/home/${USER}/.config/pulse/cookie

Now, this is a tough cookie. By default, the Pulseaudio cookie is found at ~/.config/pulse/cookie. The directory tree ~/.config/pulse/ does not exist in the container yet, and if we do not create it ourselves, then lxc config will autocreate it with the wrong ownership. So, we create it (mkdir -p), then add the correct PULSE_COOKIE line in the configuration file ~/.profile. Finally, we exit from the container and mount-bind the cookie from the host to the container. When we log in to the container again, the cookie variable will be correctly set!
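
Before playing anything, we can confirm that the container actually reaches the host’s Pulseaudio; a quick check using pactl, which comes from the pulseaudio-utils package (not installed by default in the container; output abbreviated):

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt install pulseaudio-utils
ubuntu@guiapps:~$ pactl info
Server String: tcp:10.0.185.1
...
Server Name: pulseaudio
ubuntu@guiapps:~$ exit
$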

Let’s test the audio!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ speaker-test -c6 -twav

speaker-test 1.1.0

Playback device is default
Stream parameters are 48000Hz, S16_LE, 6 channels
WAV file(s)
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 32 to 349525
Period size range from 10 to 116509
Using max buffer size 349524
Periods = 4
was set period_size = 87381
was set buffer_size = 349524
 0 - Front Left
 4 - Center
 1 - Front Right
 3 - Rear Right
 2 - Rear Left
 5 - LFE
Time per period = 8.687798 ^C
ubuntu@guiapps:~$

If you do not have 6-channel audio output, you will hear audio on some of the channels only.

Let’s also test with an MP3 file, like this one from https://archive.org/details/testmp3testfile.

ubuntu@guiapps:~$ sudo apt install mplayer
...
ubuntu@guiapps:~$ wget https://archive.org/download/testmp3testfile/mpthreetest.mp3
...
ubuntu@guiapps:~$ mplayer mpthreetest.mp3
MPlayer 1.2.1 (Debian), built with gcc-5.3.1 (C) 2000-2016 MPlayer Team
...
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A:   3.7 (03.7) of 12.0 (12.0)  0.2% 

Exiting... (Quit)
ubuntu@guiapps:~$

All nice and loud!

Troubleshooting sound issues

AO: [pulse] Init failed: Connection refused

An application tries to connect to a PulseAudio server, but no PulseAudio server is found (either none autodetected, or the one we specified is not really there).

AO: [pulse] Init failed: Access denied

We specified a PulseAudio server, but we do not have access to connect to it. We need a valid cookie.

AO: [pulse] Init failed: Protocol error

You were also trying to make the Unix socket work, but something went wrong. If you manage to make it work, write a comment below.

Testing with Firefox

Let’s test with Firefox!

ubuntu@guiapps:~$ sudo apt install firefox
...
ubuntu@guiapps:~$ firefox 
Gtk-Message: Failed to load module "canberra-gtk-module"

We get a message that the GTK+ module is missing. Let’s close Firefox, install the module and start Firefox again.

ubuntu@guiapps:~$ sudo apt-get install libcanberra-gtk3-module
ubuntu@guiapps:~$ firefox

Here we are playing a Youtube music video at 1080p. It works as expected. The Firefox session is separated from the host’s Firefox.

Note that the theming is not exactly what you get with Ubuntu. This is due to the container being so lightweight that it does not have any theming support.

The screenshot may look a bit grainy; this is due to some plugin I use in WordPress that does too much compression.

You may notice that no menubar is showing. Just like with Windows, simply press the Alt key for a second, and the menu bar will appear.

Testing with Chromium

Let’s test with Chromium!

ubuntu@guiapps:~$ sudo apt install chromium-browser
ubuntu@guiapps:~$ chromium-browser
Gtk-Message: Failed to load module "canberra-gtk-module"

So, chromium-browser also needs a libcanberra package, and it’s the GTK+ 2 package.

ubuntu@guiapps:~$ sudo apt install libcanberra-gtk-module
ubuntu@guiapps:~$ chromium-browser

There is no menubar and there is no easy way to get to it. The menu on the top-right is available though.

Testing with Chrome

Let’s download Chrome, install it and launch it.

ubuntu@guiapps:~$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
...
ubuntu@guiapps:~$ sudo dpkg -i google-chrome-stable_current_amd64.deb
...
Errors were encountered while processing:
 google-chrome-stable
ubuntu@guiapps:~$ sudo apt install -f
...
ubuntu@guiapps:~$ google-chrome
[11180:11945:0503/222317.923975:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[11180:11945:0503/222317.924441:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
^C
ubuntu@guiapps:~$ sudo apt install upower
ubuntu@guiapps:~$ google-chrome

These two errors regarding UPower go away once we install the upower package.

Creating shortcuts to the container apps

If we want to run Firefox from the container, we can simply run

$ lxc exec guiapps -- sudo --login --user ubuntu firefox

and that’s it.

To make a shortcut, we create the following file on the host,

$ cat > ~/.local/share/applications/lxd-firefox.desktop
[Desktop Entry]
Version=1.0
Name=Firefox in LXD
Comment=Access the Internet through an LXD container
Exec=/usr/bin/lxc exec guiapps -- sudo --login --user ubuntu firefox %U
Icon=/usr/share/icons/HighContrast/scalable/apps-extra/firefox-icon.svg
Type=Application
Categories=Network;WebBrowser;
^D
$ chmod +x ~/.local/share/applications/lxd-firefox.desktop

We need to make it executable so that it gets picked up and we can then run it by double-clicking.

If it does not appear immediately in the Dash, use your File Manager to locate the directory ~/.local/share/applications/
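
If the shortcut still does not show up, refreshing the desktop file cache may help; an optional extra step (the command comes from the desktop-file-utils package):

$ update-desktop-database ~/.local/share/applications/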

This is how the icon looks in a File Manager. The icon comes from the high-contrast set, which, I now remember, means just two colors 🙁

Here is the app on the Launcher. Simply drag from the File Manager and drop to the Launcher in order to get the app at your fingertips.

I hope the tutorial was useful. We explained the commands in detail; in a future tutorial, we are going to figure out how to automate all this!


Ubuntu Podcast from the UK LoCo: S10E09 – Elfin Moaning Wine

This week we’ve been teaching kids to program, tinkering with GNOME and Microsoft released Windows 10 S. Intel have a security vulnerability in its Active Management Technology and Google have EOL’d all their Nexus devices.

It’s Season Ten Episode Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Valorie Zimmerman: Google Summer of Code students are announced today

Google Summer of Code students are announced today! The KDE community is happy to welcome our new students, who will be coding for Cantor, Digikam, Frameworks, Gcompris, Kdevelop, Kopete, Krita, Kstars, Labplot, Marble, Minuet, Plasma, and Wikitolearn (alphabetical order, not in order of importance).

For Cantor, Rishabh Gupta will "Port all backends of Cantor to Q/K process." For Digikam, Yingjie Liu will make "Face Management Improvements," Ahmed Fathy will enable "Database export to remote network devices using DLNA/UPNP," Swati Lodha will create "Database separation for Similarity" and Shaza Ismail Kaoud will make a "Healing clone tool for dust spots removal."

In Frameworks, Chinmoy will enable "Polkit Support in KIO." Gcompris has two students working with the same project title, but will be doing different independent tasks. Divyam Madaan and Rudra Nil Basu will both be "Finishing started activities for GCompris in Qt-Quick." In Kdevelop, Emma Gospodinova will provide "Rust support for KDevelop" while Mikhail Ivchenko will give us "Go Language support in KDevelop."

Kopete has two students this year; Vijay Krishnavanshi will create a "Testing interface for Kopete and Improvement of protocol support" and Paulo Lieuthier will make "Chat history improvements." Krita has four students; Alexey Kapustin will provide "Telemetry for getting statistics for which features are used the most in Krita," Grigory Tantsevov "A Procedural Watercolor Brush Engine for Krita," Eliakin Costa will "Develop a showcase of Krita's new scripting support" and Aniketh Girish will "Integrate with share.krita.org."

In Kstars, Csaba Kertesz (kecsap) will "Improve stability, testing and bring modern C++ to KStars." Labplot's Fábián Kristóf will begin "Adding support for plotting of real-time data in LabPlot." For Marble, Mohammed Nafees (mnafees) will work on "Marble Indoor Maps" and Bartha Judit (Bernkastel) on "Marble Material Maps." Minuet's Ștefan Toncu (StefanT) will create a "Multiple-Instrument View Framework." For Plasma, Lukas Hetzenecker will "Make High-DPI awesome" and Atul Sharma will be "Migrating to Kirigami (Koko)."

Finally, Wikitolearn has three students for 2017. Davide Riva will work on "Chat Bridge," Vasudha Mathur will "Stabilize and ship Ruqola" and Cristian Baldi will make a "Progressive Web App for WikiToLearn."

KDE Student Programs thanks all these students for their fine work so far, and the mentors and teams who are already helping these new KDE developers fix bugs and improve our codebase, documentation, testing, and quality. We're really looking forward to working with all of you as we prepare for the coding period, which begins May 30. Look out for the student blogs and posts on the Planet and mail lists, welcome them and help them as you are able, now during the "community bonding period."

Ubuntu Insights: Canonical’s support for Kubernetes 1.6.2 released!

We’re proud to announce support for Kubernetes 1.6.2 in the Canonical Distribution of Kubernetes and the Kubernetes Charms. This is a pure upstream distribution of Kubernetes, built with operators in mind. It allows operators to deploy, manage, and operate Kubernetes on public clouds, on premises (e.g. vSphere, OpenStack), on bare metal, and on developer laptops. Kubernetes 1.6.2 is a patch release comprised mostly of bugfixes.

Getting Started

Here’s the simplest way to get a Kubernetes 1.6.2 cluster up and running:

# linux
 sudo snap install conjure-up --classic
 conjure-up kubernetes

# macOS
 brew install conjure-up
 conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production grade deployments and cluster lifecycle management it is recommended to read the full Canonical Distribution of Kubernetes documentation.

How to upgrade

To upgrade an existing 1.5.x or 1.6.x cluster, follow the upgrade instructions in the docs. Following these instructions will upgrade the charm code and resources to the Kubernetes 1.6.2 release of the charms.
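
For illustration, the upgrade boils down to upgrading each charm with Juju; a sketch assuming the default application names from the bundle (follow the linked docs for the exact order and any migration steps):

$ juju upgrade-charm kubernetes-master
$ juju upgrade-charm kubernetes-worker
$ juju upgrade-charm etcd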

New Features

  • Support for Kubernetes v1.6.2.
  • Update kubernetes-e2e charm to use snaps – pr:45044
  • Add namespace-{list, create, delete} actions to the kubernetes-master layer – pr:44277
  • Add cifs-utils package to kubernetes-worker (required for Azure) – pr:45117, fixes:227
  • Document NodePort networking for CDK – pr:44863, fixes:259

Bug Fixes

How to contact us

We’re normally found in these Slack channels and attend these sig meetings regularly:

Or via email: kubernetes@ubuntu.com

Operators are an important part of Kubernetes; we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels, feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome!

Ubuntu Insights: Discord is now available as a snap for Ubuntu and other distributions

There’s a new desktop snap in the Snap store: Discord.

Ever heard of Discord?

Within 1.5 years of its launch, Discord has become an almost mandatory tool for gamers. Adoption has been wild, from streaming to a Twitch account to voice calls to sync up on gaming tactics. But Discord can also be used as a VoIP replacement and has been praised for the crystal clear quality of its audio calls.

To install Discord as a snap:

sudo apt install snapd-xdg-open
sudo snap install discord

Just like the user growth has been amazing, the technology behind Discord is rather exciting. In particular they get their backend to work hard to make the load on the clients as light as possible. But Discord’s approach on the client is not too dissimilar to what you would find on Google Hangout or Skype. Just like them, they use webRTC to do voice or video communication. Just like Skype they package their application using Electron, the web framework to build cross-platform applications.

So why does it make sense to have Discord packaged as a snap? Snaps mean simple installation and update management with no need to worry about dependencies. It also means that when the software vendors make them available, it’s easier to access the beta version of their app or even daily builds.

The “latest and greatest” release everywhere

For app developers, snapping your Electron application for Linux users means building one snap that works on all the major Linux distributions, with support for more distributions growing all the time. User install documentation can be simplified and your application will be discoverable by millions of Linux users in the Software Center.

Application developers are in complete control of the publishing and release of their software, which drastically simplifies support as they can control the version of the app being consumed. Once a snap is installed, it will automatically be kept up to date, with install metrics available from the snap store. No more having to maintain old versions or asking users to update first before reporting bugs.

Wondering what Discord looks like on some other snap enabled distributions? Here you go:

The Discord snap running on openSUSE Leap 42.2.

The Discord snap running on Fedora 25.

Sean Davis: My Xubuntu Customization Guide

Show Your Desktop Friday used to be pretty popular in open source groups, but its popularity has declined in recent months. Let’s fix that.

Configuration:

  • Distribution: Xubuntu 17.04
  • Desktop Environment: Xfce
  • Window Manager: Xfwm4
  • Dock: Plank
  • GTK Theme: Adwaita (included with GTK+ 3)
  • Window Manager Theme: Greybird-accessibility (GitHub, Launchpad)
  • Icons: ePapirus (GitHub, PPA)
  • Cursors: Breeze Snow (Git, Launchpad)
  • Plank Theme: Greybird (GitHub)
  • Default Font: Fira Sans …

Continue reading My Xubuntu Customization Guide

Mathieu Trudel: Quick and easy network configuration with Netplan

Earlier this week I uploaded netplan 0.21 in artful, with SRUs in progress for the stable releases. There are still lots of features coming up, but it's also already quite useful. You can already use it to describe typical network configurations on desktop and servers, all the way to interesting, complicated setups like bond over a bridge over multiple VLANs...

Getting started

The simplest netplan configuration might look like this:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager

At boot, netplan will see this configuration (which happens to be installed already on all new systems since 16.10) and generate a single, empty file: /run/NetworkManager/conf.d/10-globally-managed-devices.conf. This tells the system that NetworkManager is the only renderer for network configuration on the system, and will manage all devices by default.

Working from there: a simple server

Let's look at it on a hypothetical web server; such as for my favourite test: www.perdu.com.

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true

This incredibly simple configuration tells the system that the eth0 device is to be brought up using DHCPv4. Netplan also supports DHCPv6, as well as static IPs, setting routes, etc.
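
To try this out, you can drop the snippet into a file under /etc/netplan/ and ask netplan to generate the backend configuration; a sketch, where the file name is my own choice and eth0 must match one of your interfaces:

$ sudo tee /etc/netplan/99-server.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
EOF
$ sudo netplan generate
$ ls /run/systemd/network/
10-netplan-eth0.network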


Building up to something more complex

Let's say I want a team of two NICs, and use them to reach VLAN 108 on my network:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: n
    eth1:
      mtu: 1280
      dhcp4: n
  bonds:
    bond0:
      interfaces:
      - eth1
      - eth0
      mtu: 9000
  vlans:
    bond0.108:
      link: bond0
      id: 108

I think you can see just how simple it is to configure even pretty complex networks, all in one file. The beauty in it is that you don't need to worry about what will actually set this up for you.

A choice of backends

Currently, netplan supports either NetworkManager or systemd-networkd as a backend. The default is to use systemd-networkd, but given that it does not support wireless networks, we still rely on NetworkManager to do just that.

This is why you don't need to care what supports your config in the end: netplan abstracts that for you. It generates the required config based on the "renderer" property, so that you don't need to know how to define the special device properties in each backend.

As I mentioned previously, we are still hard at work adding more features, but the core is there: netplan can set up bonds, bridges, vlans, standalone network interfaces, and do so for both static or DHCP addresses. It also supports many of the most common bridge and bond parameters used to tweak the precise behaviour of bonded or bridged devices.


Coming up...

I will be adding proper support for setting a "cloned" MAC on a device. I'm reviewing the code already to do this, and ironing out the last issues.

There are also plans for better handling of administrative states for devices, along with fixes for a few bugs that relate to supporting MAAS, where having a simple configuration style really shines.

I'm really excited for where netplan is going. It seems like it has a lot of potential to address some of the current shortcomings in other tools. I'm also really happy to hear of stories of how it is being used in the wild, so if you use it, don't hesitate to let me know about it!

Contributing

All of the work on netplan happens on Launchpad. Its source code is at https://code.launchpad.net/netplan; we always welcome new contributions.

Costales: A new hope for Ubuntu Phone: The community

Well, I have to say that Ubuntu Phone was dead for me after Mark's announcement a few weeks ago. I even posted the end of uNav and I switched to Android. But my post was a trigger for myself, because the community will not allow uNav to die so easily, nor, of course, the Ubuntu Phone :) You opened my eyes, mates!

Do you have an Ubuntu Phone/Tablet?

Please, follow these steps: https://open.uappexplorer.com/docs. Then you'll have a new Store and your device will receive community updates after Canonical closes the current one (~June 2017).

And a new uNav release

Yes, you'll find the new release of uNav 0.70 in the OpenStore :)) I fixed important API issues (uNav will only work with the OpenStore version after June 1).

uNav in the OpenStore


enjoy your Ubuntu Phone (again) | enjoy uNav (again) | enjoy the freedom (always)



Colin King: Simple job scripting in stress-ng 0.08.00

The latest release of stress-ng, 0.08.00, contains a new job scripting feature. Jobs allow one to bundle up a set of stress options into a script rather than cram them all onto the command line. One can now also run multiple invocations of a stressor with the latest version of stress-ng, and combined with job scripts we now have a powerful way of running more complex stress tests.

The job script commands are essentially the stress-ng long options without the need for the '--' option characters.  One option per line is allowed.

For example:

 $ stress-ng --verbose --tz --timeout 60s --cpu 1 --matrix 1 --icache 1

would become:

 $ cat example.job
verbose
tz
timeout 60
cpu 1
matrix 1
icache 1

One can also add comments using the # character prefix. By default the stressors will be run in parallel, but one can use the "run sequential" command in the job script to run the stressors sequentially.

The following script runs the mmap stressor multiple times using more memory on each run:

 $ cat mmap.job  
run sequential # one job at a time
timeout 2m # run for 2 minutes
verbose # verbose output
#
# run 4 invocations and increase memory each time
#
mmap 1
mmap-bytes 25%
mmap 1
mmap-bytes 50%
mmap 1
mmap-bytes 75%
mmap 1
mmap-bytes 100%
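
Running a job script is then just a matter of pointing stress-ng at the file with the new --job option:

 $ stress-ng --job mmap.job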

Some of the stress-ng stressors have various "methods" that allow one to modify the way the stressor behaves. The following example shows how job scripts can be used to exercise a system using different stressor methods:

 $ cat /usr/share/stress-ng/example-jobs/matrix-methods.job   
#
# hot-cpu class stressors:
# various options have been commented out, one can remove the
# proceeding comment to enable these options if required.
#
# run the following tests in parallel or sequentially
#
run sequential
# run parallel
#
# verbose
# show all debug, warnings and normal information output.
#
verbose
#
# run each of the tests for 60 seconds
# stop stress test after N seconds. One can also specify the units
# of time in seconds, minutes, hours, days or years with the suf‐
# fix s, m, h, d or y.
#
timeout 1m
# tz
# collect temperatures from the available thermal zones on the
# machine (Linux only). Some devices may have one or more thermal
# zones, where as others may have none.
tz
#
# matrix stressor with examples of all the methods allowed
#
# start N workers that perform various matrix operations on float‐
# ing point values. By default, this will exercise all the matrix
# stress methods one by one. One can specify a specific matrix
# stress method with the --matrix-method option.
#
#
# Method Description
# all iterate over all the below matrix stress methods
# add add two N × N matrices
# copy copy one N × N matrix to another
# div divide an N × N matrix by a scalar
# hadamard Hadamard product of two N × N matrices
# frobenius Frobenius product of two N × N matrices
# mean arithmetic mean of two N × N matrices
# mult multiply an N × N matrix by a scalar
# prod product of two N × N matrices
# sub subtract one N × N matrix from another N × N matrix
# trans transpose an N × N matrix
#
matrix 0
matrix-method all
matrix 0
matrix-method add
matrix 0
matrix-method copy
matrix 0
matrix-method div
matrix 0
matrix-method frobenius
matrix 0
matrix-method hadamard
matrix 0
matrix-method mean
matrix 0
matrix-method mult
matrix 0
matrix-method prod
matrix 0
matrix-method sub
matrix 0
matrix-method trans

Various example job scripts can be found in /usr/share/stress-ng/example-jobs; one can use these as a base for writing more complex stress tests. The example jobs have all the options commented (using the text from the stress-ng manual) to make it easier to see how each stressor can be run.

Version 0.08.00 landed in Ubuntu 17.10 Artful Aardvark and is available as a snap, and I've got backports in ppa:colin-king/white for older releases of Ubuntu.

Eric Hammond: Rewriting TimerCheck.io In Python 3.6 On AWS Lambda With Chalice

If you are using and depending on the TimerCheck.io service, please be aware that the entire code base will be swapped out and replaced with new code before the end of May, 2017.

Ideally, consumers of the TimerCheck.io API will notice no changes, but if you are concerned, you can test out the new implementation using this temporary endpoint: https://new.timercheck.io/

For example:

https://new.timercheck.io/YOURTIMERNAME/60

and

https://new.timercheck.io/YOURTIMERNAME

This new endpoint uses the same timer database, so all timers can be queried and set using either endpoint.

At some point before the end of May, the new code will be activated by the standard https://timercheck.io endpoint.

Rationale

When the TimerCheck.io service was built two years ago, the only language supported by AWS Lambda was NodeJS 0.10. The API Gateway service was console only, and quite painful to set up.

It is two years later, and Amazon is retiring NodeJS 0.10. AWS Lambda functions written with this language version will stop working at the end of May (a 1 month extension from the original April deadline).

Though AWS Lambda now supports NodeJS 6.10, I decided to completely rewrite the code for TimerCheck.io in Python 3.6, for which support was just announced.

I’ve also been wanting to try out chalice for a long time now. Since TimerCheck.io uses API Gateway and AWS Lambda, this was the perfect opportunity, especially since chalice now also supports Python 3.6.

Though I ran into a few issues trying to get chalice stages and environment variables to work, the project went far more smoothly than the initial implementation, and I am happy with the result.

Results

The chalice software makes creating APIs with Python a pleasure.

These four lines of code are an example of how easy it is to define an API with chalice. This is how the timer set API is defined.

from chalice import Chalice
app = Chalice(app_name='timercheck')

@app.route('/{timer}/{count}')
def set_timer(timer, count):
    [...]

The biggest benefit is that chalice takes care of all of the API Gateway hassles.

After a chalice deploy, all I had to do to make this production worthy was:

  • Create an ACM certificate

  • Point an API Gateway custom domain at the chalice-created API Gateway stage, using the certificate.

  • Add the host record to DNS in Route 53 for the resulting API Gateway CloudFront distribution.
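
For reference, once chalice deploy and the DNS steps above are done, you can exercise the API as shown earlier; for example, setting and then querying a timer (deploy output omitted):

$ chalice deploy
...
$ curl https://new.timercheck.io/YOURTIMERNAME/60
$ curl https://new.timercheck.io/YOURTIMERNAME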

The entire new source for the TimerCheck.io service is available in the timercheck repository on GitHub.

Original article and comments: https://alestic.com/2017/05/timercheck-aws-chalice/

Daniel Pocock: Visiting Kamailio World (Sold Out) and OSCAL'17

This week I'm visiting Kamailio World (8-10 May, Berlin) and OSCAL'17 (13-14 May, Tirana).

Kamailio World

Kamailio World features a range of talks about developing and using SIP and telephony applications and offers many opportunities for SIP developers, WebRTC developers, network operators and users to interact. Wednesday, at midday, there is a Dangerous Demos session where cutting edge innovations will make their first (and potentially last) appearance.

Daniel Pocock and Daniel-Constantin Mierla at Kamailio World, Berlin, 2017

OSCAL'17, Tirana

OSCAL'17 is an event that has grown dramatically in recent years and is expecting hundreds of free software users and developers, including many international guests, to converge on Tirana, Albania this weekend.

On Saturday I'll be giving a workshop about the Debian Hams project and Software Defined Radio. On Sunday I'll give a talk about Free Real-time Communications (RTC) and the alternatives to systems like Skype, Whatsapp, Viber and Facebook.

Ubuntu Insights: Video: Managed OpenStack upgrades

The process of upgrading OpenStack releases can be challenging, given the many moving parts and the cadence of releases.

In this use case video, Juan Carliante, Cloud Reliability Engineer for Canonical, explains how Canonical’s BootStack engineering team quickly solves issues arising during the upgrade process using Juju, an open-source application modelling tool.

Contact us to find out more about BootStack and how our engineering team can help your business.


Martin Pitt: Cockpit is now just an apt install away

Cockpit is now in Debian unstable and in Ubuntu 17.04 and the development release, which means it’s now a simple

$ sudo apt install cockpit

away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab docker.io from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/Openstack plugin, but this isn’t currently packaged for Debian/Ubuntu as kubernetes itself is not yet in Debian testing or Ubuntu.

After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.

What is Cockpit?

Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would.

(Screenshots: login screen, system page)

The left side bar is the equivalent of a “task switcher”, and the “applications” (i. e. modules for administering various aspects of your server) are run in parallel.

The main idea of Cockpit is that it should not behave “special” in any way - it does not have any specific configuration files or state keeping and uses the same Operating System APIs and privileges like you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e. g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different to projects like webmin or ebox, which basically own your computer once you use them the first time.

It is an interface for your operating system, which even reflects in the branding: as you see above, this is Debian (or Ubuntu, or Fedora, or wherever you run it on), not “Cockpit”.

Remote machines

In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here) and from then on can switch between them on the top left, and everything works and feels exactly the same, including using the terminal widget:
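
On each remote machine, that preparation amounts to something like this (shown for a Debian/Ubuntu machine; use dnf instead of apt on Fedora):

$ sudo apt install cockpit-bridge cockpit-system
$ sudo systemctl enable --now ssh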

(Screenshots: add remote, remote terminal)

The Fedora 26 machine has some more Cockpit modules installed, including a lot of “playground” ones, thus you see a lot more menu entries there.

Under the hood

Beneath the fancy Patternfly/React/JavaScript user interface is the Cockpit API and protocol, which particularly fascinates me as a developer as that is what makes Cockpit so generic, reactive, and extensible. This API connects the worlds of the web, which speaks IPs and host names, ports, and JSON, to the “local host only” world of operating systems which speak D-Bus, command line programs, configuration files, and even use fancy techniques like passing file descriptors through Unix sockets. In an ideal world, all Operating System APIs would be remotable by themselves, but they aren’t.

This is where the “cockpit bridge” comes into play. It is a JSON (i. e. ASCII text) stream protocol that can control arbitrarily many “channels” to the target machine for reading, writing, and getting notifications. There are channel types for running programs, making D-Bus calls, reading/writing files, getting notified about file changes, and so on. Of course every channel can also act on a remote machine.

One can play with this protocol directly. E. g. this opens a (local) D-Bus channel named “d1” and gets a property from systemd’s hostnamed:

$ cockpit-bridge --interact=---

{ "command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1" }
---
d1
{ "call": [ "/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get",
          [ "org.freedesktop.hostname1", "StaticHostname" ] ],
  "id": "hostname-prop" }
---

and it will reply with something like

d1
{"reply":[[{"t":"s","v":"donald"}]],"id":"hostname-prop"}
---

(“donald” is my laptop’s name). By adding additional parameters like host, and by passing credentials, these can also be run remotely, by logging in via ssh and running cockpit-bridge on the remote host.

Stef Walter explains this in detail in a blog post about Web Access to System APIs. Of course Cockpit plugins (both internal and third-party) don’t directly speak this, but use a nice JavaScript API.

As a simple example how to create your own Cockpit plugin that uses this API you can look at my schroot plugin proof of concept which I hacked together at DevConf.cz in about an hour during the Cockpit workshop. Note that I never before wrote any JavaScript and I didn’t put any effort into design whatsoever, but it does work ☺.

Next steps

Cockpit aims at servers and getting third-party plugins for talking to your favourite part of the system, which means we really want it to be available in Debian testing and stable, and Ubuntu LTS. Our CI runs integration tests on all of these, so each and every change that goes in is certified to work on Debian 8 (jessie) and Ubuntu 16.04 LTS, for example. But I’d like to replace the external PPA/repository on the Install instructions with just “it’s readily available in -backports”!

Unfortunately there’s some procedural blockers there, the Ubuntu backport request suffers from understaffing, and the Debian stable backport is blocked on getting it into testing first, which in turn is blocked by the freeze. I will soon ask for a freeze exception into testing, after all it’s just about zero risk - it’s a new leaf package in testing.

Have fun playing around with it, and please report bugs!

Feel free to discuss and ask questions on the Google+ post.
