
Ubuntu Insights: USB hotplug with LXD containers



USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let’s run an LXD container with the Android debugging tools installed, accessing a USB connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone over USB, make sure it’s unlocked and you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list; in my case, that’d be the “Sony Ericsson Mobile” entry.

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default. So unplugging the device and plugging it back in on the host will have it removed and re-added to the container.

The “productid” property isn’t required, you can set only the “vendorid” so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices or devices which change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, instead requiring the device to be present for the container to be allowed to start.
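
For example, assuming the device hasn’t already been added to the container, making the same Sony phone mandatory at container startup would look something like this:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da required=true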

More details on USB device properties can be found here.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
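
For example, for a hypothetical USB serial adapter that the host kernel driver exposes as /dev/ttyUSB0, the passthrough would look something like this:

stgraber@dakara:~$ lxc config device add c1 serial unix-char path=/dev/ttyUSB0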

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it


Sebastian Dröge: Writing GStreamer Elements in Rust (Part 4): Logging, COWs and Plugins


This is part 4; the older parts can be found here: part 1, part 2 and part 3.

It’s been quite a while since the last update, so I thought I should write about the biggest changes since last time, even if they’re mostly refactoring. They nonetheless show how Rust is a good match for writing GStreamer plugins.

Apart from the actual code changes, the code was also relicensed from LGPL-2 to a dual MIT-X11/Apache2 license to make everybody’s life a bit easier with regard to static linking and building new GStreamer plugins on top of this.

I’ll also speak about all this and more at RustFest.EU 2017 in Kiev on the 30th of April, together with Luis.

The next steps after all this will be to finally make the FLV demuxer feature-complete, for which all the base-work is already done now.

Logging

One thing that was missing so far and made debugging problems always a bit annoying was the missing integration with the GStreamer logging infrastructure. Adding println!() everywhere just to remove them again later gets boring after a while.

The GStreamer logging infrastructure is based, like many other solutions, on categories in which you log your messages and levels that describe the importance of the message (error, warning, info, …). Logging can be disabled at compile time, up to a specific level, and can also be enabled/disabled at runtime for each category to a specific level, and performance impact for disabled logging should be close to zero. This now has to be mapped somehow to Rust.
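
For reference, the runtime control happens through the GST_DEBUG environment variable; something like the following would keep everything at WARNING while enabling output up to DEBUG level for a single category (here the “mycategory” used in the example below):

$ GST_DEBUG=2,mycategory:5 gst-launch-1.0 fakesrc num-buffers=10 ! fakesink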

During last year’s “24 days of Rust” in December, slog was introduced (see that post for an overview of how slog is used). It seems like the perfect match here: it allows implementing new “output backends” (called a Drain in slog) and has very low performance impact. So how logging works now is that you create a Drain per GStreamer debug category (which will create the category if needed), and all logging to that Drain goes directly to GStreamer:

// The None parameter is a GStreamer Element, which allows the logging system to
// print the element name and other things on the GStreamer side
// The 0 is for defining a color for the logging in that category
let logger = Logger::root(GstDebugDrain::new(None,
                                             "mycategory",
                                             0,
                                             "Some description"),
                          None);
debug!(logger, "Some output with a number {}", 1);

With lazy_static we can then make sure that the Drain is only created once and can be used from multiple places.
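
A minimal sketch of that pattern (assuming the GstDebugDrain from the example above; see the repository for the real code) could look like this:

lazy_static! {
    // Created once on first use, then shared by every call site
    static ref LOGGER: Logger = Logger::root(GstDebugDrain::new(None,
                                                                "mycategory",
                                                                0,
                                                                "Some description"),
                                             None);
}

// Any code in the crate can now log to the same category
debug!(LOGGER, "Some output with a number {}", 1);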

All the implementation for the Drain can be found here, and it’s all rather straightforward plumbing. The interesting part is that slog makes sure that the message string and all its formatting arguments (the integer in the above example) are passed down to the Drain without doing any formatting. As such we can skip the whole formatting step if the category is not enabled or its level is too low, which gives us almost zero-cost logging for the cases when it is disabled. And of course slog also allows disabling logging up to a specific level at compile time via cargo features, making it really zero-cost if disabled at compile time.

Safe & simple Copy-On-Write

In GStreamer, buffers and similar objects inherit from a base class called GstMiniObject. This base class provides infrastructure for reference counting, copying (cloning) of the objects and a dynamic (at runtime, not to be confused with Rust’s Cow type) Copy-On-Write mechanism (writable access requires a reference count of 1, or a copy has to be made). This is very similar to Rust’s Arc, which for a contained type that implements Clone provides the make_mut() and get_mut() functions that work the same way.

Unfortunately, we can’t use Arc directly here for wrapping the GStreamer types, as the reference counting is already done inside GStreamer and adding a second layer of reference counting on top is not going to make things work better. So there’s now a GstRc, which provides more or less the same API as Arc and wraps structs that implement the GstMiniObject trait. That trait provides GstRc with functions for getting the raw pointer, swapping the raw pointer and creating new instances from a raw pointer. The actual structs for buffers and other types don’t do any reference counting or other instance handling themselves, and only have unsafe constructors. The general idea here is that they will never exist outside a GstRc, which can then provide you with (mutable or not) references to them.

With all this we now have a way to let Rust do the reference counting for us and enforce the writability rules of the GStreamer API automatically without leaving any chance of doing things wrong. Compared to C where you have to do the reference counting yourself and could accidentally try to modify a non-writable (reference count > 1) object (which would give an assertion), this is a big improvement.

And as a bonus this is all completely without overhead: all that is passed around in the Rust code is (once compiled) the raw C pointer of the objects, and the function calls map directly to the C functions too. Let’s take an example:

// This gives a GstRc<Buffer>
let mut buffer = Buffer::new_from_vec(vec![1, 2, 3, 4]).unwrap();

{ // A new block to keep the &mut Buffer scope (and mut borrow) small
  // This would fail (return None) if the buffer was not writable
  let buffer_ref = buffer.get_mut().unwrap();
  buffer_ref.set_pts(Some(1));
}

// After this the reference count will be 2
let mut buffer_copy = buffer.clone();

{
  // buffer.get_mut() would return None, the below creates a copy
  // of the buffer instead, which makes it writable again
  let buffer_copy_ref = buffer.make_mut().unwrap();
  buffer_copy_ref.set_pts(Some(2));
}

// Access to Buffer functions that only require a &Buffer can
// be done directly thanks to the Deref trait
assert_ne!(buffer.get_pts(), buffer_copy.get_pts());

After reading this code you might ask why DerefMut is not implemented in addition, which would then do make_mut() internally if needed and would allow getting around the extra method call. The reason for this is that make_mut() might do an (expensive!) copy, and as such DerefMut could do a copy implicitly without the code having any explicit indication that a copy might happen here. I would be worried that it could cause non-obvious performance problems.

The last change I’m going to write about today is that the repository was completely re-organized. There is now a base crate and separate plugin crates (e.g. gst-plugin-file). The former is a normal library crate containing some C code and all the glue between GStreamer and Rust; the latter don’t contain a single line of C code (and no unsafe code either at this point) and compile to standalone GStreamer plugins.

The only tricky bit here was generating the plugin entry point from pure Rust code. GStreamer requires a plugin to export a symbol with a specific name, which provides access to a description struct. As the struct also contains strings, and generating const static strings with ‘\0’ terminator is not too easy, this is still a bit ugly currently. With the upcoming changes in GStreamer 1.14 this will become better, as we can then just export a function that can dynamically allocate the strings and return the struct from there.

All the boilerplate for creating the plugin entry point is hidden by the plugin_define!() macro, which can then be used as follows (and you’ll understand what I mean with ugly ‘\0’ terminated strings then):

plugin_define!(b"rsfile\0",
               b"Rust File Plugin\0",
               plugin_init,
               b"1.0\0",
               b"MIT/X11\0",
               b"rsfile\0",
               b"rsfile\0",
               b"https://github.com/sdroege/rsplugin\0",
               b"2016-12-08\0");

As a side-note, handling multiple crates next to each other is very convenient with the workspace feature of cargo and the “build --all”, “doc --all” and “test --all” commands since 1.16.

Ubuntu Podcast from the UK LoCo: S10E04 – Sulky Meek Work


We discuss the best open source operating system on the planet, the mighty Ubuntu MATE! One of us went to a Devin Townsend Project gig, Maps.me is our GUI love and we discuss ways to upgrade from Ubuntu 12.04 to 16.04.

It’s Season Ten Episode Four of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We interview Martin Wimpress, the charismatic (and devilishly handsome) lead developer of Ubuntu MATE about how it has taken the world by storm and remains the best open source operating system implementation available today, leaving all others in the dust. Well, maybe not all the others but definitely better than openSUSE. Isn’t everything though?
    • Yes, Martin writes the show notes ;-)
  • We share a GUI Lurve:
    • Maps.Me – An open source maps and navigation app using OpenStreetMap data.
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Bryan Quigley: Juju’s localhost LXD now works with offline images


Some environments don’t allow direct Internet access. Prior to Juju 2.1.x it wasn’t possible to use Juju locally with LXD without the Internet.

Prereq: Setup Juju 2.1.x and LXD however you usually do in the environment

  1. Get an LXD importable image and move to the offline machine
    wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-lxd.tar.xz https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.xz
  2. Import the image and assign it an alias so Juju knows to use it
    lxc image import xenial-server-cloudimg-amd64-lxd.tar.xz xenial-server-cloudimg-amd64-root.tar.xz --alias juju/xenial/amd64
  3. It’s a good idea to confirm that LXD can launch the image fine
    lxc launch juju/xenial/amd64
  4. Bootstrap and start deploying charms
    juju bootstrap localhost

This is just one part of running offline. It may only work if you have a local package mirror that the LXD image can detect, as it does need to install some packages.

Additionally, some charms may download software directly from Internet sites, so those would need further workarounds.

Fixed bug: https://bugs.launchpad.net/juju/+bug/1650651

Ubuntu Insights: Snaps are now available for Ubuntu 14.04 LTS desktop and server


The snapd team recently announced a new release of snapd supporting Ubuntu 14.04 LTS (Trusty) for servers and desktop (i386, amd64). The snapd service is what makes possible the installation and management of applications packaged as snaps.

In a nutshell, if you have systems using Ubuntu 14.04 LTS, welcome to a brand new world!

How to install snapd?

Installation on Ubuntu 14.04 is straightforward. Firstly, open a terminal, and type the following command to install snapd:

$ sudo apt install snapd 

You can now download and run any application from the Ubuntu snap store. Let’s start by installing the hello-world snap!

$ sudo snap install hello-world 

The first time you install a snap, snapd also installs the “core” snap, which is the common platform for all other snaps. After the installation, you can use the “snap list” command to list installed snaps:

$ snap list
Name         Version  Rev   Developer  Notes
core         16-2     1441  canonical  -
hello-world  6.3      27    canonical  - 

Then, in order to run “hello-world”:

$ hello-world 

However, the first time you try to run a snap on 14.04, you might see the error “The command cannot be found”. This is because “/snap/bin” has not yet been added to the $PATH environment variable. To handle this, a script called “apps-bin-path.sh” has been added to /etc/profile.d; it runs automatically after you reboot or re-login and adds the snap launcher directory to your $PATH. Once that’s done, snaps launch as expected:

$ hello-world
Hello World! 
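
If you’d rather not log out and back in, a quick workaround is to extend $PATH for the current shell session:

$ export PATH="$PATH:/snap/bin"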

Hooray! The “hello-world” snap is running on your Ubuntu 14.04 machine. To further explore snaps, you can use the “snap find” command to search for more.

Without any arguments, it will show featured snaps:
(Command output edited to fit the blog width.)

$ snap find
Name               Version   Developer   Notes  Summary
docker             1.11.2-9  docker-inc  -      The docker app de...
lxd                2.12      canonical   -      System container m...
mongo32            3.2.7     niemeyer    -      MongoDB document-o...
rocketchat-server  0.54.2    rocketchat  -      Group chat server...

It can also be used as a search engine for all stable snaps:

$ snap find 3d
Name                 Version      Developer     Notes  Summary
blender-tpaw         2.78c        tpaw          -      The free a...
cloudcompare         2.8.1-1      cloudcompare  -      3D point clo...
mvs-texturing-mardy  20170215-1   mardy         -      MVS Textur...
[...] 

As you can see, the “snap” command is used to manage snaps the same way you use “apt” to manage debs. You can learn more about it by following this short tutorial (the first two steps are about installing snapd on various distros, so you can skip them!). Of course, the built-in help is available through “snap --help”.

A few snaps to try

Krita: a free and open source digital painting application. Snaps allow Krita developers to release their software directly to users, at their own pace, regardless of OS release schedules.

$ sudo snap install krita

PostgreSQL 9.3, 9.4, 9.5 and 9.6: since snaps are confined, you can install multiple PostgreSQL versions at the same time, which comes in handy for testing.

$ sudo snap install postgresql96

Nextcloud and Wekan: No need to present Nextcloud for file sharing, but maybe you don’t know about Wekan: a kanban board server similar to Trello. By installing these snaps you can spin up instances of complex collaboration software in minutes.

$ sudo snap install nextcloud
$ sudo snap install wekan-ondra

openHAB: turn any system into a home automation backend, in a minute.

$ sudo snap install openhab

Ubuntu Make: deploy and set up developer environments easily on Ubuntu (Android, Unity3D, Arduino, Swift, etc.). As this snap requires full access to your system, it’s only installable in “classic” mode, which more or less means “unconfined”.

$ sudo snap install --classic ubuntu-make

To get some insight on available or installed snaps, the “snap info” command will give you everything you need, such as commands provided by the snap:

$ snap info ubuntu-make
name:      ubuntu-make
summary:   "Setup your development environment on ubuntu easily"
publisher: 
description: |
  Ubuntu Make provides a set of functionality to setup,
  maintain and personalize your developer environment easily. It will handle
  all dependencies, even those which aren't in Ubuntu itself, and install
  latest versions of the desired and recommended tools.
  .
  This is the latest master from ubuntu make, freshly built from
  https://github.com/ubuntu/ubuntu-make. It may contain even unreleased
  features!
  
commands:
  - ubuntu-make.umake
tracking:    stable
installed:   master (x1) 13MB classic
refreshed:   2017-03-29 10:47:09 +0200 CEST
channels:                
  stable:    master (17) 17MB classic
  candidate: master (17) 17MB classic
  beta:      master (17) 17MB classic
  edge:      master (17) 17MB classic

Next steps

To browse all the available stable snaps in the store, you can visit uappexplorer, use the “snap find” command or install the “snapweb” snap and visit https://localhost:4201 for a local store interface.
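
The snapweb route is just another snap install, for example:

$ sudo snap install snapweb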

If you want to snap your software and publish it, you can get a quick start at tutorials.ubuntu.com and dive in further with the snapcraft documentation.

If you have any questions, get in touch with the snapcraft team on Rocket.Chat and on the snapcraft mailing-list.

Nish Aravamudan: iSCSI initiator names in cloud-images


I recently worked on a fix for LP: #144499 in Ubuntu’s cloud images where every instance (VM or LXD container) using a given cloud image would end up sharing the iSCSI initiator name. The iSCSI initiator name is intended to be unique, so that you can not only uniquely identify which system is using a given target on the iSCSI server, but also, if desired, restrict which initiators can use which targets.

This behavioral change was introduced by the fix for LP: #1057635, which worked around a different issue with initiator names by re-instituting an older behavior in Ubuntu. In effect, the open-iscsi package can configure the iSCSI initiator name either at install time or at boot time. This generation is controlled by a helper script looking for GenerateName=Yes in /etc/iscsi/initiatorname.iscsi; if it sees that string, it generates a new unique initiator name using another helper. Ideally, this would be done at first boot (by the helper script); however, if iSCSI is used for the root device, the initramfs will not contain a valid initiator name and will fail to find the root iSCSI disk. So, the fix for 1057635 re-instituted the prior Ubuntu behavior of creating the initiator name when the open-iscsi package is installed.
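
For reference, a generated /etc/iscsi/initiatorname.iscsi ends up containing a single unique name; on Debian-derived systems it looks along these lines (the hex suffix here is an illustrative placeholder):

InitiatorName=iqn.1993-08.org.debian:01:b2f1a9e3c4d5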

In order for iSCSI root to work, though, open-iscsi needs to be pre-installed in the installer environment (“seeded” in Ubuntu parlance), or if using images, needs to be installed by default so the image can use iSCSI. That results in processes like the CPC cloud image creation installing the open-iscsi package during the image creation. But that installation ends up creating an initiator name due to the prior bug. And thus every instance using that image has the same initiator name! To fix this, at least somewhat, I added a hook to the CPC image generation which, if it detects that /etc/iscsi/initiatorname.iscsi exists, overwrites it with GenerateName=yes. Thus, on the start of any instance using that cloud image, a new unique initiator name will be used.

Scott Moser (smoser), though, pointed out this “fix” is not quite complete. If you start with a cloud image and then make a snapshot, or a local image, from a running instance — all new instances using that local image will end up sharing an initiator name. This is actually relatively tricky to figure out — what we want is every unique instance to get a unique initiator name, not every image. I’m going to be trying to work out this issue on LP: #1677726. I probably will need to configure an iSCSI root and boot setup at home first 🙂


Ross Gammon: Resurrecting my old Drupal Site


As I have previously blogged, I recently managed to resurrect my old Drupal site that ran in the Amazon AWS cloud, and get it working again on a new host. I have just written up a summary of how I battled through the process, which can be found here.

Unfortunately, I took a long time to write it up, so it is not as detailed as I originally intended. But if, like me, you run a Drupal site, or you did and it is also broken, then feel free to follow the link for a read. It may at least give you some ideas to follow up. I made heavy use of DrupalVM. If you are just starting out with a Drupal website, and you have more than FTP access to your hosting, I recommend using DrupalVM (which is built with Vagrant & Ansible) for local development and testing.


Stuart Langridge: Enable Compiz in Ubuntu MATE to fix dragging the window borders


If you use Ubuntu MATE you may have found that it’s really difficult to resize windows; you can move your mouse pointer over the edge of a window to drag the window bigger or smaller, but the “grab area” is only one pixel wide. This is alarmingly irritating.

It’s a long-standing bug (people were complaining about this in 2010, seven years ago!), which has been fixed in Ubuntu proper for a long time, but has resurfaced in Ubuntu MATE. Anyway, to fix resizing of windows, you need to enable Compiz in Ubuntu MATE; that replaces Ubuntu MATE’s standard “window manager”, which is named “Marco”, with the Compiz window manager, and Compiz doesn’t have this daft problem. Yes, it is annoying that this doesn’t just work, and apparently they’re working on fixing it so it doesn’t become our problem as users to fix the deficiencies, but in the interim you can at least fix it for yourself even though you shouldn’t have to.

So, open MATE Tweak, which is in the System menu (Preferences > Look and Feel > MATE Tweak), and under Windows, choose Compiz (Advanced GPU accelerated desktop effects) under Window Manager.

Technically, Compiz requires a 3D accelerated graphics card. However, unless your machine is very, very, very old indeed, it will have enough 3D acceleration to do this; this is not like playing games or similar. My ancient Dell laptop copes with it fine, so it should not be a worry.

Ubuntu MATE will then switch to Compiz (you don’t need to reboot or anything) and will show you a window saying “Keep this window manager?” If you see that window, you can click “Yes, OK” in it. (If for some reason this hasn’t worked, then you won’t see that window and so it will automatically switch back, so your computer isn’t broken.) Now, resizing Ubuntu MATE windows should be a lot easier, because the resize grab area will not be one single pixel.


Stuart Langridge: A walk along the arches


So, this evening, there were beers with Dan and Ebz and Charles and after a brief flirtation with the Alchemist (where we didn’t go, since the bar was three deep and they might be good at cocktails but they’re really, really slow at cocktails and so I didn’t fancy waiting fifteen minutes just to get to the bar) and the Bureau (who are also good at cocktails, and sell an interesting Czech lager with a picture of the brewery on the glass (and while I’m on the subject, why does every bar feel it necessary to sell me Beer X in a Beer X-branded glass these days? I really don’t care, bar staff of the world. Don’t feel like you have to)), we descended upon the Indian Brewery in their new place under the arches. Honestly, up to now, I’d tried their Birmingham Lager in cans (perfectly nice), and that was it; I’d never been to their bar. And it’s fabulous. I was vaguely peckish before getting there but I probably wouldn’t have bothered with anything; I flirted with the menu in Bureau but was basically ungrabbed by whatever it is that grabs me about menus. And then we piled out of the cab (which we’d got in to save Ebz and her tottery heels; obviously Sport Dan would have walked the distance to get there but I was quietly glad of not having to) into Indian and the delectable smell of the place completely turned me around. Twenty feet from the place I was mildly thinking about food; six feet from the door I was ready to eat a horse, and one of my companions, and possibly a road sign. This is a place that knows how to capitalise on the weaknesses of their punters. On the way in the door we were interrupted by a chap who, tactfully, hasn’t skipped many lunches, who asked us: are you eating? Yes, we all said, salivating. And this helpful chap cleared away a small end of a table — the Indian Brewery interior is organised as a batch of wooden benches, like the ones you get in a pub garden. So if you’re not eating, he’ll find you a space to sit in between others; if you are, he’ll find you a slightly larger block, possibly by elbowing those already there. And the friendliness doesn’t end there. Everything, from the Bollywood posters on the wall to the attitude of the staff and the casual, no-frills nature of the layout exudes friendliness; it feels welcoming, like someone took the concept of welcomingness and distilled it out of the air into an atmosphere that pervades the whole place and stepping into it feels like a hot bath. And the Fat Naan is utterly delightful. “Are you sure you want a whole fat naan to yourself?” asked Dan. Yes. Yes, I was sure. And I was right. It was bloody delectable. Honestly, I can’t recommend it enough. Good beers (and a good selection of beers, moreover, which I wasn’t expecting), great Indian food, friendly staff. If you want more from a place, I can’t see what it is that you want.

Stéphane Graber: Using Wake on LAN with MAAS 2.x


Introduction

I maintain a number of development systems that are used as throwaway machines to reproduce LXC and LXD bugs by the upstream developers. I use MAAS to track who’s using what and to have the machines deployed with whatever version of Ubuntu or CentOS is needed to reproduce a given bug.

A number of those systems are proper servers with hardware BMCs on a management network that MAAS can drive using IPMI. Another set of systems are virtual machines that MAAS drives through libvirt.

But I’ve long had another system I wanted to get in there. That machine is a desktop computer but with a server grade SAS controller and internal and external arrays. That machine also has a Fiber Channel HBA and Infiniband card for even less common setups.

The trouble is that this being a desktop computer, it’s lacking any kind of remote management that MAAS supports. That machine does however have a good PCIe network card which provides reliable wake-on-lan.

Back in the day (MAAS 1.x), there was a wake-on-lan power type that would have covered my use case. This feature was however removed from MAAS 2.x (see LP: #1589140) and the development team suggests that users who want the old wake-on-lan feature instead install Ubuntu 14.04 and the old MAAS 1.x branch.

Implementing Wake on LAN in MAAS 2.x

I am, however, not particularly willing to install an old Ubuntu release and an old version of MAAS just for that one trivial feature, so I instead spent a bit of time to just implement the bits I needed and keep a patch around to be re-applied whenever MAAS changes.

MAAS doesn’t provide a plugin system for power types, so I unfortunately couldn’t just write a plugin and distribute that as an unofficial power type for those who need WOL. I instead had to resort to modifying MAAS directly to add the extra power type.

The code change needed to re-implement a wake-on-lan power type is pretty simple and only took me a few minutes to sort out. The patch can be found here: https://dl.stgraber.org/maas-wakeonlan.diff

To apply it to your MAAS, do:

sudo apt install wakeonlan
wget https://dl.stgraber.org/maas-wakeonlan.diff
sudo patch -p1 -d /usr/lib/python3/dist-packages/provisioningserver/ < maas-wakeonlan.diff
sudo systemctl restart maas-rackd.service maas-regiond.service

Once done, the new power type will show up in the web UI.

After selecting the new “Wake on LAN” power type, enter the MAC address of the network interface that you have WOL enabled on and save the change.

MAAS will then be able to turn the system on, allowing for the normal commissioning and deployment stages. For everything else, this power type behaves like the “Manual” type, asking the user to manually go shutdown or reboot the system as you can’t do that through Wake on LAN.
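
The patch drives the wakeonlan tool installed in the first step above, so you can sanity-check that WOL works for a given machine before adding it to MAAS (the MAC address below is a placeholder):

$ wakeonlan 00:11:22:33:44:55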

Note that you’ll have to re-apply part of the patch whenever MAAS is updated. The patch modifies two files and adds a new one. The new file won’t be removed during an upgrade, but the two modified files will get reverted and need patching again.

Conclusion

This is certainly a hack and if your system supports anything better than Wake on LAN, or you’re willing to buy a supported PDU just for that one system, then you should do that instead.

But if the inability to turn a system on is all that stands in your way from adding it to your MAAS, as was the case for me, then that patch may help you.

I hope that in time MAAS will either get that feature back in some way or get a plugin system that I can use to ship that extra power type in its own separate package without needing to alter any of MAAS’ own files.

Canonical Design Team: March’s reading list


Stuart Langridge: Podcasts I like


Aaron alerts me to the recent initiative of sharing one’s favourite podcasts with the hashtag #trypod. That sounds like fun, speaking as a podcast listener and performer. So, here’s the stuff I listen to, at the moment (it is April 2017).

The Dresden Files podcast

If you’re sitting about waiting for the next Harry Dresden book, you’ll like this. An excellent example of fandom; to someone outside the club they just ceaselessly hash over the books, but I think that’s good fun both to do and to listen to. And I’ve learned about quite a bit that I missed, as well.

Rocket

Rocket’s great. Very tech-focused, and the presenters skew a little more to the journalism side than other tech podcasts which tend to be more developer or sysadmin-based. If you like the stuff that I do, you’ll probably like this. Look here for commentary on what’s going on in the tech world, plus quite a lot of laughing at one another’s jokes.

The Ubuntu podcast

Admittedly made up of friends of mine, but that’s not the point. Amusing commentary on tech and open source stuff, with an Ubuntu slant, but these days they end up talking more about weird little hardware projects and Ubuntu MATE than what Canonical are up to. It’s fun; they try for a “family-friendly” sort of vibe as well. While on-air, at least.

West Wing Weekly

An episode-by-episode relisten-to and discussion-of The West Wing, the TV programme. This is fandom stuff, much like Dresden above, but one of the two presenters is Joshua Malina who played Will Bailey in the show and so they tend to have lots of interviews with members of the cast, the writers, the directors, and so on.

Simply elementary

This covers elementary, the OS. They tend to go fairly in-depth on specific aspects or projects from the elementary team, although not always (they had me on once and were berated about various things, which I’m grateful for the chance to have done). Worth listening to if you’re part of the elementary community.

HTTP 203

The legendary and legendarily infrequent HTTP 203 podcast. Web development, right up at the cutting edge (sometimes considerably past the cutting edge and into the field on the other side of the road) from Jake Archibald and Paul Lewis of the Google Chrome developer relations team. About 20% amazing insight on what the web is and where it’s going, another 20% them describing interesting web things they’ve been up to, and the remainder off-colour stories and arsing about, which is marvellous stuff. If you’re a web dev and you’re not listening to this I don’t know what’s wrong with you.

More or Less: Behind the Stats

From the BBC. A short but frequent programme in which they do a deep-dive into some quoted or reported bit of statistics and explain whether it’s right and what it means. I’ve learned quite a lot from this!

Linux Voice

From the editorial team of the late, great Linux Voice magazine. They’re amusing to listen to, and they’ve often got up to quite a bit. Notable for “Voice of the Masses”, which involves various audience polls, and “Finds”: random things and bits of software they’ve discovered this week and want to mention.

Mark Steadman’s Escape Hatch

Mark Steadman and his guest on Brum Radio do an extended hour-long interview and also play a sort of Choose Your Own Adventure game; something like a very constrained role-playing session. When I was on it I won and didn’t die! Yay! So that’s encouraging.

Late Night Linux

Traditional-style Linux podcast, but a good laugh; Joe, Ikey, Félim, and Jesse kick around some ideas, Ikey goes on about the distro he builds, they look at news and goings-on. For Linux people only, pretty much, but if you are then it’s one of the better ones.

And finally…

There is Bad Voltage.

Ubuntu Insights: Cloud Chatter: March 2017


Our March edition is packed with exciting content. We begin with our recent announcement of Ubuntu 12.04 Extended Security Maintenance, providing ongoing security updates for Ubuntu 12.04 LTS for at least another year. Download our latest ‘Carrier Cloudification’ eBook, or join our upcoming webinars on OpenStack, Containers, GPUs/Deep Learning and VNF deployments. Check out our roundup of key demos shown at our booth at Mobile World Congress. We’ve also included a fantastic host of tutorials for MAAS and Containers.

Introducing Ubuntu 12.04 Extended Security Maintenance (ESM)

Ubuntu 12.04 LTS was released in April 2012, and as with all LTS releases, Canonical has provided ongoing security patches and bug fixes for a period of 5 years. The Ubuntu 12.04 LTS support period will end on Friday, April 28, 2017.

Following the end-of-life, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04. These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers on a per-node or per hour basis.

All Ubuntu 12.04 LTS users are encouraged to upgrade to Ubuntu 14.04 LTS or Ubuntu 16.04 LTS. But for those who cannot upgrade immediately, Ubuntu 12.04 ESM updates will help ensure the on-going security and integrity of Ubuntu 12.04 systems.

To learn more about Ubuntu 12.04 ESM, and the options available for upgrading your Ubuntu 12.04 systems today, watch this on-demand webinar.

Learn the secrets to innovative and scalable VNF deployment

Are you a service provider interested in understanding the state-of-the-art for orchestration of VNF services and looking at the best practices? Or are you a networking vendor wanting to understand what NFV orchestration and service modelling can provide? Then join our upcoming webinar on the 7th April when experts from Canonical and Fraunhofer FOKUS will demonstrate how to easily deploy scalable VNF services using Open Baton and Juju. Learn more

Commoditizing artificial intelligence with Kubernetes and GPUs

While Deep Learning and AI are considered the fuel of IT growth for 2017, little has been done outside of the cloud to help enterprises to adopt it. In our upcoming webinar (12th April), we’ll discuss how such commoditization can be achieved using Kubernetes, nVidia GPUs, and the operation toolbox provided by Canonical. Register for the webinar

Carrier cloudification: what every telecom executive needs to know

The cloud has forced today’s telecom service providers to transform and perform at increasingly high speeds, counter to their normal mode of operations. Due to customer demand, operators are moving away from just providing connectivity, and have realised they can compete at the customer or regional level for a share of wallet by providing value-added services that public cloud providers simply cannot compete with. In our latest ebook, we talk about the solution and expertise required to help telecoms make the move to delivering revenue generating services much faster. Read the eBook

GPUs and Kubernetes for Deep Learning

The Canonical Distribution of Kubernetes is the only distribution that natively supports GPUs, enabling Deep Learning and Media workloads. In a three-part tutorial, we show you how to deploy Kubernetes with GPUs, add EFS storage to your cluster, and define, design, deploy and operate a Deep Learning pipeline using Tensorflow automation. Read more to get started

Join our OpenStack and Containers Office Hours

Our new ‘Office Hours’ sessions help community members and customers deploy, manage, and scale their Ubuntu-based cloud infrastructure. These interactive webinars, hosted by a senior engineer from our cloud architecture team cover a range of topics around OpenStack and containers. Register and learn more

Get up and running with Kubernetes

Our Kubernetes webinar series will be covering a range of operational tasks over the next few months. These will include upgrades, backups, snapshots, scalability, and other operational concerns that will allow you to completely manage your Kubernetes cluster(s) with confidence. The available webinars in the series so far include:

  • Getting started with the Canonical Distribution of Kubernetes – watch on-demand
  • Learn the secrets of validation and testing your Kubernetes cluster – watch on-demand
  • Painless Kubernetes upgrades – register now

Another successful Mobile World Congress show

‘Software-defined everything’ represents a step change in the telco industry in particular. The entire industry is moving away from a mode of organising and thinking about their network and services as appliances with fixed functions to stacks of interacting software. At Mobile World Congress 2017, we told the story of how we help our customers to build modern cloud infrastructure that allows them to deploy new services with greater speed and agility. View our key selection of demos

Top posts from Insights

Ubuntu Cloud in the news

OpenStack, SDN & NFV

Containers & Storage

Big data / Machine Learning / Deep Learning

Ubuntu Insights: The MirAL Story


This is a guest post by Alan Griffiths, Software Developer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

I’m Alan Griffiths and I’m a software developer. Being a software developer means I deal with a lot of problems that are rarely appreciated by non-developers. This is a story about dealing successfully with one of these problems.

Software developers often talk about “technical debt”. This phrase comes from a metaphor that tries to explain the issue without being “too technical”. I think the term was first used by Ward Cunningham, but I could be wrong.

The metaphor describes the effect of doing things in a way that meets the immediate goals but introduces future costs. For example, using fixed English text in an application can get it working for a demo or even an initial release, but if it needs to work in other languages there will be a lot of changes needed. Not just to text, but to assumptions about layout.

The problem with this metaphor is that other people (in the business the developers work for) are used to working with bank loans and other forms of acceptable debt and don’t realise the true cost of “technical debt” – it is more “payday loan” than “mortgage”.

As a result many software projects struggle with technical debt that adds a significant, hidden cost to the business. I hope a few of them will be inspired by this success to find a solution of their own.

What is Mir?
The project I’m working on is Mir. And it had a form of technical debt.

Mir is a set of libraries providing the facilities for the participants in display management: the servers that manage organising windows on one or more screens and the clients (or applications) that provide the content of the windows.

On the server side Mir has been in use by the Unity8 window manager on phones and tablets and as a preview on the Ubuntu 16.10 desktop.

There are multiple downstream projects using the client side of Mir: there’s Mir support available in GTK, Qt and SDL2. In addition there’s an X11 server that runs on Mir: Xmir; this allows X-based applications to run on Mir servers.

Under the hood there are facilities to load plugin modules for different graphics platforms, input platforms and render platforms. At the time of writing the supported platforms are the Mesa and Android graphics stacks. This enables running the server either directly on these drivers or as a client of either Mir itself or of X11.

While having Unity8 use Mir on real devices that ship to real customers has been a big benefit, the close association of the two projects has had some downsides. (Just like the “only English” application mentioned above.)

The Debt
To explain the problem I need to introduce the terms API and ABI. The API (Application Programming Interface) is used by other programmers to write code that works with Mir and the ABI (Application Binary Interface) is how the code works with Mir when it runs. Changes to the API and ABI can break programs that use Mir. They are considered stable if they only change in ways that allow the existing programs to work.

Because the client side of Mir has multiple projects using it, the importance of a stable API and ABI has been both appreciated and acted upon.

On the server side things haven’t been so disciplined. The API has evolved gradually and the ABI has broken on almost every release.

The slow evolution of the server API has been managed by maintaining a “compatibility branch” of Unity8 and releasing those changes to Unity8 at the same time as every Mir release. In practice, it isn’t just Unity8 that we need to keep in step, there are additional projects (unity-system-compositor and QtMir) involved in this dance, which makes it even more involved and expensive.

Maintaining and releasing these compatibility branches significantly increases the effort involved in releasing Mir. Changes need to be made and tested across a family of projects belonging to separate teams. Because the pain involved increased gradually this didn’t get the attention I felt it deserved. It is actually ridiculous to take several weeks and man-days of effort to release software!

The unstable server API and ABI has another, indirect cost: it makes it impractical for anyone outside of the few Canonical teams we work with to write and maintain a Mir server. Unless someone updates and releases such a server with every Mir release it would be broken by each release.

This problem seemed unlikely to change in the normal course of events. The Mir server API was not designed with ABI stability in mind and the development focus was on delivering more features, not on reducing the cost of this technical debt. While the team made some efforts that reduced the API churn, they didn’t affect the underlying issue.

Elevating a system from chaos to order takes energy and conscious effort. And energy was constantly being drained by managing the release process.

Repaying The Debt: MirAL
Canonical has a policy of allowing staff to work on projects they choose for half a day each week (subject to some reasonable conditions). And, having checked with management, I elected to work on providing an alternative, stable API and ABI for writing window managers.

I began by setting up a separate “Mir Abstraction Layer” project (MirAL) and populating it with a copy of the code from the Mir example servers. This immediately identified a series of bugs in the Mir packaging that had gone undetected. Nothing hard to fix, but obstacles to using Mir: headers referenced but not installed, overlooked dependencies on other projects, incorrect pkg-config files, and so forth. I filed the Mir bugs and fixed them in my “day job”.

I then started separating out the generic window management logic from the specifics in the examples and building an API between them. Thus emerged the three principal interfaces of this library: a “window management policy”, a “basic window manager” into which a policy slots, and “window management tools” which provide the functionality.

This meant that the basic window management functionality, like the placement of menus, could be shared between different approaches to window management:

  • a “normal” desktop;
  • a “tiling” version; and,
  • a “kiosk” to support embedded uses

Having got these in place, along with some other meaningful concepts, I started reworking the code to protect against the types of ABI breakage that were all too common with the existing server APIs. There are a lot of things that can break ABIs and APIs; public data structures and virtual function tables in particular are fragile with respect to changes. One technique I used a lot was the “Cheshire Cat” idiom to avoid these.
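
To illustrate the idea (a simplified sketch, not MirAL’s actual code), the Cheshire Cat idiom hides all data members behind an opaque pointer, so the class layout that client code sees never changes even when the implementation does:

// window_info.h -- hypothetical example of the Cheshire Cat idiom
#include <memory>

class WindowInfo
{
public:
    WindowInfo();
    ~WindowInfo();                // defined where Self is complete

    auto width() const -> int;

private:
    struct Self;                  // real state lives in the .cpp only
    std::unique_ptr<Self> self;   // opaque, fixed-size handle
};

// window_info.cpp -- members can be added or removed here without
// breaking the ABI seen by code compiled against the header above
struct WindowInfo::Self { int width = 640; };

WindowInfo::WindowInfo() : self{std::make_unique<Self>()} {}
WindowInfo::~WindowInfo() = default;
auto WindowInfo::width() const -> int { return self->width; }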

I’d got to the point where I’d proved to myself that this approach could work when a new priority arrived in my “day job”. This was to provide window management support for Unity8 on the desktop. Until this point, Unity8 had only had to deal with the “one screen, one active application, one window” context of the phone and then an extension of this to support “sidestage” on the tablet form factor. And in Mir much of the window management support was example and test code.

Repurposing MirAL

What was needed was a place to consolidate the existing window management support and iterate quickly towards a more integrated approach. To support additional projects beyond Unity8 (both from within Canonical or from outside) this also needed to be a place where other shells and desktop environments can leverage this functionality.

It sounded a lot like what I’d started with MirAL, and “the business” could more readily appreciate the value of this functionality than the cost of the technical debt. So MirAL switched from being my hobby to being my “day job” and gained Unity8, a real world shell, as a prospective downstream.

A colleague (Gerry Boland) who had done much of the work integrating Unity8 with Mir and I started working to make it possible to update the QtMir project (which does the integration) to use the new API and exploit the window management support it provides.

We copied the code from the QtMir project into the MirAL source tree and started work on joining things together. This proved very helpful in refining the concepts in the MirAL API, identifying gaps in the functionality it provides and soon gave us confidence that this was a workable approach.

Once the effectiveness of this approach had been established, work was started on joining things up all the way into Unity8 and integrating it with the more sophisticated animations and transitions it implements.

It took a few months to get MirAL added to the archive and to integrate all the changes back into the original QtMir. Recent versions of Unity8 have used this work to get window management working for the desktop.

MirAL for Testing Client Applications
Another piece of work happening around the same time was the effort to get third party applications to work correctly with Mir shells.

The window management work I was doing in MirAL and especially the sample “miral-shell” has become a testing ground for how window management features used by applications interact with the Mir support.

The MirAL shell examples can be installed and run as follows:


$ sudo apt install miral-examples
$ miral-app

To aid with testing MirAL comes with a handy script (miral-app) to set up a “miral-shell” running in a desktop window. There’s also a “miral-desktop” script that runs miral-shell as a full desktop.

Debugging Window Management
While working to track down problems in the interaction between toolkits and MirAL’s window management I found the time to introduce an immensely helpful logging facility that logs all calls into the window management policy and all the calls made to the window management tools. This has been essential in discovering why problems we see are happening. It has helped diagnose bugs in MirAL, the window management policies and the toolkit backends. This can be used with any server based on MirAL by adding “--window-management-trace” to the command-line:

$ miral-app --window-management-trace

Applications Using Toolkits With Mir Backends
Most applications are not written directly against X-Windows but use “toolkits” that provide higher level concepts and these have requirements on window management (like putting tooltips in the right place).

Two toolkits of immediate significance are GTK+ (on which gnome applications are built) and Qt (which is widely used in Canonical) both of which already have optional Mir “backends”. However, the amount of testing these received had been limited whilst work was focussed on the phone.

These are now being tested with both Unity8 and miral-shell. They can be started directly from a terminal in a “miral-app” session, or from outside using “miral-run <application>”.

X11 Applications
Not all applications use toolkits that support Mir directly, and supporting X11 applications by running Xmir (an X server based on Mir) is a bit fiddly. To facilitate testing this approach with MirAL there’s another script, “miral-xrun”, that finds a free port, starts an X server, runs the application and then closes the X server when the application exits.

The State Of MirAL Now
It is now a year since I started MirAL as an elective project on Launchpad [https://launchpad.net/miral] and it is shipping in Ubuntu Zesty [17.04]! (For earlier releases of Ubuntu see “Note on getting MirAL” below.)

MirAL is also available for Raspberry Pi, DragonBoard and other devices running Ubuntu Core as a snap. The “Mir Kiosk” mentioned here is the miral-kiosk example program: [https://developer.ubuntu.com/en/snappy/guides/mir-snaps/].

For a developer using MirAL it is very easy to create Mir based window managers. The API needed by developers is installed like this:


$ sudo apt install libmiral-dev

As an experiment, a colleague created this (rather silly) shell in under an hour: [https://github.com/BrandonSchaefer/bad-shell].

Mir based window managers do not need to be reworked and rebuilt when Mir (or MirAL) is released. That is the result of MirAL providing a stable ABI.

At the time of writing not all of the Unity8 dependencies on the Mir “server” APIs have yet been replaced by MirAL. The reason is that in addition to the window management functionality that MirAL supports, Unity8 customizes compositing of the displays.

This means that we still need to update and release Unity8 when we release Mir. While the size of the changes and the consequent testing has been reduced we are still “paying interest” on the “technical debt” and I still intend to fix that.

Work is continuing to improve the window management capabilities (we recently introduced support for “workspaces”) and to offer the Mir compositing facilities used by QtMir that are not currently supported. These too will be presented in a form suitable for consumption by other projects.

For the latest information visit my “Canonical Voices” blog [http://voices.canonical.com/alan.griffiths/category/miral/]

Note on getting MirAL
Yakkety [16.10] has only version 0.2 of MirAL (the current release is 1.3) and Xenial [16.04LTS] doesn’t have it at all. (If you’re using the Xenial “stable phone overlay” ppa then that has the latest MirAL release, but unless you already know about and use that ppa this isn’t an advisable way to try MirAL.)

If you’re not yet on Zesty or want the latest version of MirAL it can be built from source on Xenial, Yakkety and Zesty by “checking out” the source:


$ bzr branch lp:miral

This creates a “miral” directory with more instructions in ‘miral/getting_and_using_miral.md’.

Bryan Quigley: Canonical is hiring


Canonical Design Team: What we learned at our first official GV design sprint


Last month the web team ran its first design sprint as outlined in The Sprint Book, by Google Ventures’ Jake Knapp. Some of us had read the book recently and really wanted to give the method a try, following the book to the letter.

In this post I will outline what we’ve learned from our pilot design sprint, what went well, what could have gone better, and what happened during the five sprint days. I won’t go into too much detail explaining what each step of the design sprint consists of — for that you have the book. If you don’t have that kind of time, but would still like to know what I’m talking about, here’s an 8-minute video that explains the concept:


Before the sprint

One of the first things you need to do when running a design sprint is to agree on a challenge you’d like to tackle. Luckily, we had a big challenge that we wanted to solve: ubuntu.com‘s navigation system.

ubuntu.com’s different levels of navigation

Assigning roles

If you’ve decided to run a design sprint, you’ve also probably decided who will be the Facilitator. If you haven’t, you should, as this person will have work to do before the sprint starts. In our case, I was the Facilitator.

My first Facilitator task was to make sure we knew who was going to be the Decider at our sprint.

We also agreed on who was going to participate, and booked one of our meeting rooms for the whole week plus an extra one for testing on Friday.

My suggestion for anyone running a sprint for the first time is to also name an Assistant. There is so much work to do before and during the sprint, that it will make the Facilitator’s life a lot easier. Even though we didn’t officially name anyone, Greg was effectively helping to plan the sprint too.

Evangelising the sprint

In the week that preceded the sprint, I had a few conversations with other team members who told me the sprint sounded really great and they were going to ‘pop in’ whenever they could throughout the week. I had to explain that, sadly, this wasn’t going to be possible.

If you need to do the same, explain why it’s important that the participants commit to the entire week, focusing on the importance of continuity and of accumulated knowledge that the sprint’s team will gather throughout the week. Similarly, be pleasant but firm when participants tell you they will have to ‘pop out’ throughout the week to attend to other matters — only the Decider should be allowed to do this, and even so, there should be a deputy Decider in the room at all times.

Logistics

Before the sprint, you also need to make sure that you have all the supplies you need. I tried as much as possible to follow the suggestions for materials outlined in the book, and I even got a Time Timer. In retrospect, it would have been fine for the Facilitator to just keep time on a phone, or a less expensive gadget if you really want to be strict with the no-phones-in-the-room policy.

Even though the book says you should start recruiting participants for the Friday testing during the sprint, we started a week before that. Greg took over that side of the preparation, sending prompts on social media and mailing lists for people to sign up. When participants didn’t materialise in this manner, Greg sent a call for participants to the mailing list of the office building we work at, which worked wonders for us.

Know your stuff

Assuming you have read the book before your sprint, I recommend (at least for your first sprint) re-reading each day’s chapter the evening before, and taking notes.

I printed out the checklists provided in the book’s website and wrote down my notes for the following day, so everything would be in one place.

Facilitator checklists with handwritten notes

I also watched the official video for the day (which you can get emailed to you by the Sprint Bot the evening before), and read all the comments in the Q&A discussions linked to from the emails. These questions and comments from other people who have run sprints were incredibly useful throughout the week.

Sprint Bot email for the first day of the sprint

Does this sound like a lot of work? It was. I think if/when we do another sprint the time spent preparing will probably be reduced by at least 50%. The uncertainty of doing something as involved as this for the first time made it more stressful than preparing for a normal workshop, but it’s important to spend the time doing it so that things run smoothly during the sprint week.

Day 1

The morning of the sprint I got in with plenty of time to spare to set up the room for the kick-off at 10am.

I bought lots of healthy snacks (which were promptly frowned upon by the team, who were hoping for sweeter treats); brought a jug of water, cups and all the supplies to the room; cleared the whiteboards; and set up the chairs.

What follows are some of the outcomes, questions and other observations from our five days.

Morning

In the morning of day 1 you define a long term goal for your project, list the ways in which the project could fail in question format, and draw a flowchart, or map, of how customers interact with your product.

  • Starting the map was a little bit tricky, as it wasn’t clear how the map should look when there is more than one type of customer, each with potentially different outcomes
  • In the book there are no examples with more than one type of customer, which meant we had to read and re-read that part of the book until we decided how to proceed, as we have several customer types to cater for
  • Moments like these can drain the team’s confidence in the process; that’s why it’s important for the Facilitator to read everything carefully more than once, and ideally not to be the only person to do so
  • We did the morning exercises much faster than prescribed, but the same didn’t happen in the afternoon!

Discussing the target for the sprint

Afternoon

In the afternoon experts from the sprint and guests come into the room and you ask them lots of questions about your product and how things work. Throughout the interviews the team is taking notes in the “How Might We” format (for example, “How might we reduce the amount of copy?”). By the end of the interviews, you group the notes into themes, vote on the ones you find most useful or interesting, move the most voted notes onto their right place within your customer map and pick a target in the map as the focus for the rest of the sprint.

  • If you have time, explain how “How Might We” notes work before the lunch break, so you save that time for interviews in the afternoon
  • Each expert interview should last about 15-30 minutes, which didn’t feel like long enough to get all the valuable knowledge from our experts — we had to interrupt them somewhat abruptly to make sure the interviews didn’t run over. Next time it might be easier to have a list of questions we want to cover before the interviews start
  • Choreographing the expert interviews was a bit tricky as we weren’t sure how long each would take. If possible, tell people you’ll call them a couple of minutes before you need them rather than set a fixed time — we had to send people back a few times because we weren’t yet finished asking all the questions to the previous person!
  • It took us a little longer than expected to organise the notes, but in the end, the most voted notes did cluster around the key section of the map, as predicted in the book!

Some of the How Might We notes on the wall after the expert interviews

Other thoughts on day 1

  • Sprint participants might cancel at the last minute. If this happens, ask yourself if they could still appear as experts on Monday afternoon? If not, it’s probably better to write them off the sprint completely
  • There was a lot of checking the book as the day went by, to confirm we were doing the right thing
  • We wondered if this comes up in design sprints frequently: what if the problem you set out to solve pre-sprint doesn’t match the target area of the map at the end of day 1? In our case, we had planned to focus on navigation but the target area was focused on how users learn more about the products/services we offer

A full day of thinking about the problem and mapping it doesn’t come naturally, but it was certainly useful. We conduct frequent user research and usability testing, and are used to watching interviews and analysing findings; nevertheless, the expert interviews and hearing different perspectives from within the company were very interesting and gave us a different type of insight to build upon during the sprint.

Day 2

By the start of day 2, it felt like we had been in the sprint for a lot longer than just one day — we had accomplished a lot on Monday!

Morning

The morning of day 2 is spent doing “Lightning Demos” after a quick 20 minutes of research. These can be anything that might be interesting, from competitor products to previous internal attempts at solving the sprint challenge. Before lunch, the team decides who will sketch what in the afternoon: whether everyone will sketch the same thing, or different parts of the map.

  • We thought the “Lightning Demos” were a great way to do demos — fast, and capturing the most important things quickly
  • Deciding who would sketch what wasn’t as straightforward as we might have thought. We decided that everyone should do a journey through our cloud offerings so we’d get different ideas on Wednesday, knowing there was the risk of not everything being covered in the sketches
  • Before we started sketching, we made a list of sections/pages that should be covered in the storyboards
  • As on day 1, the morning exercises were done faster than prescribed; we were finished by 12:30, with a 30-minute break from 11 to 11:30

Our sketches from the lightning demos

Afternoon

In the afternoon, you take a few minutes to walk around the sprint room and take down notes of anything that might be useful for the sketching. You then sketch, starting with quick ideas and moving onto a more detailed sketch. You don’t look at the final sketches until Wednesday morning.

  • We spent the first few minutes of the afternoon looking at the current list of participants for the Friday testing to decide which products to focus on in our sketches, as our options were many
  • We had a little bit of trouble with the “Crazy 8s” exercise, where you’re supposed to sketch 8 variations of one idea in 8 minutes. It wasn’t clear what we had to do so we re-read that part a few times. This is probably the point of the exercise: to remove you from your comfort zone, make you think of alternative solutions and get your creative muscles warmed up
  • We had to look at the examples of detailed sketches in the book to have a better idea of what was expected from our sketches
  • It took us a while to get started sketching but after a few minutes everyone seemed to be confidently and quietly sketching away
  • With complicated product offerings there’s the instinct to want access to devices to check product names, features, etc — I assumed this was not allowed, but some people were sneakily checking their laptops!
  • Naming your sketch wasn’t as easy as it sounded
  • Contrary to what we expected, the afternoon sketching exercises took longer than the morning’s, at 5pm some people were still sketching

Everyone sketching in silence on Tuesday afternoon

Tuesday was lots of fun. Starting the day with the demos, without much discussion on the validity of the ideas, creates a positive mood in the team. Sketching in a very structured manner removes some of the fear of the blank page, as you build up from loose ideas to a very well-defined sketch. The silent sketching was also great as it meant we had some quiet time to pause and think a solution through, giving the people who tend to be more quiet an opportunity to have their ideas heard on par with everyone else.

Day 3

No-one had seen the sketches done on Tuesday, so the build-up to the unveiling on day 3 was more exciting than for the usual design review!

Morning

On the Wednesday morning, you decide which sketch (or sketches) you will prototype. You stick the sketches on the wall and review them in silence, discuss each sketch briefly and each person votes on their favourite. After this, the Decider casts three votes, which can follow or not the votes of the rest of the team. Whatever the Decider votes on will be prototyped. Before lunch, you decide whether you will need to create one or more prototypes, depending on whether the Decider’s (or Deciders’) votes fit together or not.

  • We had 6 sketches to review
  • Although the book wasn’t clear as to when the guest Decider should participate, we invited ours from 10am to 11.30am as it seemed he should be there for the entire morning review process — this worked out well
  • During the speed critique people started debating the validity or feasibility of solutions, which was expected but meant some work for the Facilitator to steer the conversation back on track
  • The morning exercises put everyone in a positive mood, it was an interesting way to review and select ideas
  • Narrating the sketches was harder than it might seem at first, and narrating your own sketch isn’t much easier either!
  • It was interesting to see that many of the sketches included similar solutions — there were definite patterns that emerged
  • Even though I emphasised that the book recommends more than one prototype, the team wasn’t keen on it and the focus of the pre-lunch discussion was mostly on how to merge all the voted solutions into one prototype
  • As on all other days, and because we decided on an all-in-one prototype, we finished the morning exercises by noon

The team reviewing the sketches in silence on Wednesday morning

Afternoon

In the afternoon of day 3, you sketch a storyboard of the prototype together, starting one or two steps before the customer encounters your prototype. You should move the existing sketches into the frames of the storyboard when possible, and add only enough detail that will make it easy to build the prototype the following day.

  • Using masking tape was easier than drawing lines for the storyboard frames
  • It was too easy to come up with new ideas while we were drawing the storyboard and it was tricky to tell people that we couldn’t change the plan at this point
  • It was hard to decide the level of detail we needed to discuss and add to the storyboard. We finished the first iteration of the storyboard a few minutes before 3pm. Our first instinct was to start making more detailed wireframes with the remaining time, but we decided to take a break for coffee and come back to see where we needed more detail in the storyboard instead
  • It was useful to keep asking the team what else we needed to define as we drew the storyboard before we started building the prototype the following day
  • Because we read out the different roles in preparation for Thursday, we ended up assigning roles straight away

Discussing what to add to our storyboard

Other thoughts on day 3

  • One sprint participant couldn’t attend on Tuesday but was back on Wednesday, which wasn’t ideal but didn’t have a negative impact
  • While setting up for the third day, I wasn’t sure if the ideas from the “Lightning Demos” could be erased from the whiteboard, so I took a photo of them and erased them as, even with the luxury of massive whiteboards, we wouldn’t have had space for the storyboard later on!

By the end of Wednesday we were past the halfway mark of the sprint, and the excitement in anticipation for the Friday tests was palpable. We had some time left before the clock hit 5 and wondered if we should start building the prototype straight away, but decided against it — we needed a good night’s sleep to be ready for day 4.

Day 4

Thursday is all about prototyping. You need to choose which tools you will use, prioritising speed over perfection, and you also need to assign different roles for the team so everyone knows what they need to do throughout the day. The interviewer should write the interview script for Friday’s tests.

  • For the prototype building day, we assigned: two writers, one interviewer, one stitcher, two makers and one asset collector
  • We decided to build the pages we needed with HTML and CSS (instead of using a tool like Keynote or InVision) as we could build upon our existing CSS framework
  • Early in the afternoon we were on track, but we were soon delayed by a wifi outage which lasted almost 1.5 hours
  • It’s important to keep communication flowing throughout the day to make sure all the assets and content that are needed are created or collected in time for the stitcher to start stitching
  • We were finished by 7pm — if you don’t count the wifi outage, we probably would have been done by 6pm. The extra hour could have been avoided with a little more detail in the storyboard’s page wireframes and in the content delivered to the stitcher, and fewer last-minute tiny changes, but all in all we did pretty well!

Joana and Greg working on the prototype

Other thoughts on day 4

  • We had our sprint in our office, so it would have been possible for us to ask for help from people outside of the sprint, but we didn’t know whether this was “allowed”
  • We could have assigned more work to the asset collector: the makers and the stitcher were looking for assets themselves as they created the different components and pages rather than delegating the search to the asset collector, which is how we normally work
  • The makers were finished with their tasks more quickly than expected — not having to go through multiple rounds of reviews that sometimes can take weeks makes things much faster!

By the end of Thursday there was no denying we were tired, but happy about what we had accomplished in such a small amount of time: we had a fully working prototype and five participants lined up for Friday testing. We couldn’t wait for the next day!

Day 5

We were all really excited about the Friday testing. We managed to confirm all five participants for the day, and had an excellent interviewer and solid prototype. As the Facilitator, I was also happy to have a day where I didn’t have a lot to do, for a change!

Thoughts and notes on day 5

On Friday, you test your prototype with five users, taking notes throughout. At the end of the day, you identify patterns within the notes and based on these you decide which should be the next steps for your project.

  • We’re lucky to work in a building with lots of companies who employ our target audience, but we wonder how difficult it would have been to find and book the right participants within just 4 days if we needed different types of users or were based somewhere else
  • We filled up an entire whiteboard with notes from the first interview and had to go get extra boards during the break
  • Throughout the day, we removed duplicate notes from the boards to make them easier to scan
  • Some participants don’t naturally talk a lot and need constant reminders to think out loud
  • We had the benefit of having an excellent researcher in our team who already knows and does everything the book recommends doing. It might have been harder for someone with less research experience to make sure the interviews were unbiased and ran smoothly
  • At the end of the interviews, after listing the patterns we found, we weren’t sure whether we could/should do more thorough analysis of the testing later or if we should chuck the post-it notes in the bin and move on
  • Our end-of-sprint decision was to have a workshop the following week where we’d plan a roadmap based on the findings — could this be considered “cheating” as we’re only delaying making a decision?

The team observing the interviews on Friday

A wall of interview notes

The Sprint Book notes that you can have one of two results at the end of your sprint: an efficient failure, or a flawed success. If your prototype doesn’t go down well with the participants, your team has only spent 5 days working on it, rather than weeks or potentially months — you’ve failed efficiently. And if the prototype receives positive feedback from participants, most likely there will still be areas that can be improved and retested — you’ve succeeded imperfectly.

At the end of Friday we all agreed that our prototype was a flawed success: there were things we tested that we’d never have thought to try before and that received great feedback, but some aspects certainly needed a lot more work to get right. An excellent conclusion to 5 intense days of work!

Final words

Despite the hard work involved in planning and getting the logistics right, running the web team’s trial design sprint was fun.

The web team is small and stretched over many websites and products. We really wanted to test this approach so we could propose it to the other teams we work with as an efficient way to collaborate at key points in our release schedules.

We certainly achieved this goal. The people who participated directly in the sprint learned a great deal during the five days. Those in the web team who didn’t participate were impressed with what was achieved in one week and welcoming of the changes it initiated. And the teams we work with seem eager to try the process out in their teams, now that they’ve seen what kind of results can be produced in such a short time.

How about you? Have you run a design sprint? Do you have any advice for us before we do it again? Leave your thoughts in the comments section.

Matthias Klumpp: On Tanglu


It’s time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people who wanted to create a Debian derivative for workstations with a time-based release schedule, using and showcasing new technologies (including systemd, but also bundling systems and other things), built in the open with a community, using infrastructure similar to Debian’s. Tanglu is designed explicitly to complement Debian, not to compete with it on all devices.

Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd, and with the help of our contributors we could kill a few nasty issues affecting it and Debian before systemd ended up becoming the default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience in addition to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (though it might be revived later). A lot of other less-impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!).

On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak for distributions which aren’t Debian easier. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally ending up with Debile, and added a set of own custom tools to collect archive QA information and present it to our developers in an easy to digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of basic Debian archive management tools.

During the past year, however, the project’s progress slowed down significantly. For this, I am mostly to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu who were interested in contributing were unfortunately not packagers, and sometimes not developers at all, and we didn’t have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests lay and what they would like to do, so they could be given a useful task. This sounds great in principle, but in practice it is actually not very helpful; a curated list of “junior jobs” is a much better starting point. We also invested almost zero time in making the project known and creating the necessary “buzz” and excitement that’s needed to sustain a project like this. Doing more in the advertisement domain and the “help newcomers” area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn’t enough; talking about it is of crucial importance, and that is something I knew about but didn’t realize the impact of for quite a while. As strange as it sounds, investing in the tech alone isn’t enough: community building is of equal importance.

Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.

The other reason Tanglu has problems is that too much is centralized on me. That is something I have wanted to rectify for a long time, but whenever a task wasn’t getting done in Tanglu because no people were available to do it, I completed it myself. This essentially increased the project’s dependency on me as a single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn’t a problem as long as that person is available enough to perform tasks when asked), it also centralizes the knowledge of how to run services and how to do things. And if you want to give up power, people first need the knowledge of how to perform the specific task (which they will never gain if there’s always that one guy doing it). I still haven’t found a great way to solve this – it’s a problem that essentially solves itself once the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.

Last year I had far less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus – that’s probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally I need to invest a lot more time into other projects such as AppStream, and a lot of random other stuff that just needs continuous maintenance and discussion (AppStream especially eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can’t split myself, and KDE’s cloning machine remains broken, so I can’t even use that ;-). In terms of projects there is also a personal hard limit on how much stuff I can handle, and exceeding it long-term is not very healthy: in these cases I try to satisfy all projects and in the end don’t focus enough on any of them, which leaves me with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).

Good news everyone! (sort of)

So, this sounded overly negative; where does this leave Tanglu? Fact is, I cannot commit the crazy amounts of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work on Purism has an overlap with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the “make bundling nice!” department).

So, what actually is the way forward? First, maybe I have the chance to find a few people willing to work on tasks in Tanglu. It’s a fun project, and I learned a lot while working on it. Tanglu also possesses some unique properties few other Debian derivatives have, like being built from source completely (allowing us things like swapping core components or compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we do not have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system with machines doing most of the background work.

If it turns out that absolutely nothing works and we can’t attract new people to help with Tanglu, it would mean that there generally isn’t much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing 🙂

The one thing that had to stop was leaving our users in the dark about what is happening.

Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about 🙂

If you are interested in contributing to Tanglu, get in touch with us! We have an IRC channel #tanglu-devel on Freenode (go there for quicker responses!), forums and mailing lists.

It looks like I will be at Debconf this year as well, so you can also catch me there! I might even talk about PureOS/Tanglu infrastructure at the conference.

Ubuntu Insights: Nexiona collaborates with Canonical and Dell to create MIIMETIQ Edge


There is a perception that IoT projects are complex, expensive and therefore limited to larger companies that have the means to manage them. However, the reality is very different. Nexiona, creators of IoT technology for system integrators, has developed a package that’s affordable to all company sizes – with a process that’s simple enough for any system integrator to install.

The package is an all-in-one IoT solution within a single box. It combines Nexiona’s MIIMETIQ EDGE platform, Dell’s robust and affordable hardware and Ubuntu’s secure and well-known OS that easily adapts to enterprise – a solution unlike anything else on the market.

Learn more about the product, how it can benefit your SME, and its presence in various market sectors in the case study below.

Download the case study

Andrew SB: A Few Recent DigitalOcean Tools


Obligatory comment about how it’s been quite a while since the last time I blogged…

With that out of the way, I wanted to share a few things that I’ve done recently that might be useful for others. In particular, there are a couple DigitalOcean related tools that might come in handy.

fabric-digitalocean

As I’ve written before, Fabric is a great tool for automating some basic systems administration tasks. Recently, I wrote fabric-digitalocean in order to make it easier to use Fabric with DigitalOcean Droplets. It provides an @droplets decorator for use in your Fabfiles. It can take a list of Droplet IDs, a tag, or a region as an argument. Then, using the DigitalOcean API, these arguments are expanded to provide Fabric with a list of hosts.

More and more, tagging is becoming an important way to interact with DigitalOcean resources. For example, DO Load Balancers use tags for service discovery, and the newly released Monitoring functionality allows you to create alert policies based on tags. If you’re already using tags across your fleet, the ability to run tasks on instances based on how they are tagged is extremely convenient.

As a quick example, this Fabfile could be used to run uptime on all Droplets tagged “production”:

from fabric.api import task, run
from fabric_digitalocean.decorators import droplets

@task
@droplets(tag='production')
def example():
    run('uptime')
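
Since the decorator also accepts Droplet IDs or a region, tasks like the following should work as well (the ids= and region= keyword names are my assumption from the project description above; check the README to confirm):

from fabric.api import task, run
from fabric_digitalocean.decorators import droplets

# Run against two specific Droplets (the IDs are made up for this example):
@task
@droplets(ids=[123456, 123457])
def disk_usage():
    run('df -h')

# Run against every Droplet in a region:
@task
@droplets(region='nyc3')
def kernel_version():
    run('uname -r')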

It can be installed via pip with:

pip install fabric-digitalocean

The source is available on GitHub. I’d love to hear any ideas you might have for other integration points between Fabric and the DOAPI.

DigitalOcean monitoring agent Ansible role

While Fabric obviously still has a place in my toolkit, Ansible has taken on a growing role in how I manage my infrastructure. It is flexible enough for both running one-off tasks and standing up services fully under configuration management.

In January, DigitalOcean released an open-source monitoring agent. It’s used to power both the graphs displaying Droplet metrics as well as the new Monitoring and alerting features. You can optionally install the agent when creating new Droplets, though if you have an existing fleet, it can be a bit tedious to install it on all of your currently running instances.

When I was backfilling the agent onto my existing Droplets, I wanted a single Ansible role that I could use regardless of the underlying distribution. I was also eager to see what it takes to get something up on Ansible Galaxy, their hub for sharing and distributing roles. So I put together a role to install the agent on all supported distros and made it available there.

You can install it with:

ansible-galaxy install andrewsomething.do-agent

Once installed, an example playbook simply looks like:

- hosts: all
  become: true
  roles:
     - andrewsomething.do-agent
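
Assuming you have an inventory file listing your Droplets (the file names here are illustrative), applying it is then a standard playbook run:

ansible-playbook -i inventory playbook.yml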

The source is also available on GitHub.

Ted Gould: X11 apps on Ubuntu Personal


Snaps first launched with the ability to ship desktop apps on Ubuntu 16.04, which is an X11-based platform. It was noted that while secure and containerized, the fact that many Snaps were using X11 made them less secure than they could be. That was a reality of shipping Snaps for 16.04, but something we definitely want to fix for 18.04 using Unity8 and the Mir graphics stack. We can’t just ignore all the apps that folks have built for 16.04, though, so we need a solution to run X11 applications on Unity8 securely.

To accomplish this we give each X11 application its own instance of the XMir server. This means that even if evil X applications use insecure features of (or find vulnerabilities in) the Xorg server, they only compromise their individual instance of the X server and are unable to affect other applications. Sounds simple, right? Unfortunately there is a lot more to making an application experience seamless than just handling the graphics buffers and making sure it can display on screen.

The Mir server is designed to handle graphics buffers and their positions on the screen; it doesn’t handle all the complexities of things like cut-and-paste and window menus. To support X11 apps that use these features we’re using some pieces of the libertine project, which runs X11 apps in LXD containers. It includes a set of helpers, like pasted, which handle these additional protocols. pasted watches the selected window and the X11 clip buffers to connect into Unity8’s cut-and-paste mechanisms, which behave very differently. For instance, Unity8 doesn’t allow snooping on clip buffers to steal passwords.

It is also important to note at this point that in Ubuntu Personal we aren’t just snapping up applications, we are snapping everything. We expect to have snaps of Unity8, snaps of Network Manager and a snap of XMir. This means that XMir isn’t even running in the same security context as Unity8; a vulnerability in XMir only compromises XMir and the files it has access to. A bug in an X11 application would therefore have to get into XMir, and then attack the Mir protocol itself, before getting to other applications or user-session resources.

The final user experience? We hope that no one notices whether their applications are X11 applications or Mir applications; users shouldn’t have to care about display servers. What we’ve tried to create is a way for people to keep their favorite X11 applications while they (hopefully) transition away from X11, and still get the security benefits of a Mir-based desktop.
