Channel: Planet Ubuntu

Colin King: Working towards stress-ng 0.10.00

Over the past 9+ months I've been cleaning up stress-ng in preparation for a V0.10.00 release.   Stress-ng is a portable Linux/UNIX Swiss army knife of micro-benchmarking kernel stress tests.

The Ubuntu kernel team uses stress-ng for kernel regression testing in several ways:
  • Checking that the kernel does not crash when being stress tested
  • Performance (bogo-op throughput) regression checks
  • Power consumption regression checks
  • Core CPU Thermal regression checks
The wide range of micro-benchmarks in stress-ng allows us to keep track of a range of metrics so that we can catch regressions, as in the example below.
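For example, a typical throughput check boils down to a run like this (the stressor mix and duration are illustrative, not the kernel team's exact configuration):

# Run 4 CPU stressors and 2 VM stressors for 60 seconds, printing
# bogo-ops metrics that can be compared across kernel builds.
stress-ng --cpu 4 --vm 2 --timeout 60s --metrics-brief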

I've tried to focus on several aspects of stress-ng over the last development cycle:
  • Improve per-stressor modularization. A lot of code has been moved from the core of stress-ng back into each stress test.
  • Clean up a lot of corner case bugs found when we've been testing stress-ng in production.  We exercise stress-ng on a lot of hardware and in various cloud instances, so we find occasional bugs in stress-ng.
  • Improve usability, for example, adding bash command completion.
  • Improve portability (various kernels, compilers and C libraries). It really does build and run on a *lot* of Linux/UNIX/POSIX systems.
  • Improve kernel test coverage.  Try to exercise more kernel core functionality and reach parts other tests don't yet reach.
Over the past several days I've been running various versions of stress-ng on a gcov-enabled 5.0 kernel to measure kernel test coverage. As shown below, the tool has been slowly gaining more core kernel coverage over time:

With the use of gcov + lcov, I can observe where stress-ng is not currently exercising the kernel and this allows me to devise stress tests to touch these un-exercised parts.  The tool has a history of tripping kernel bugs, so I'm quite pleased it has helped us to find corners of the kernel that needed improving.
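On a gcov-enabled kernel, the capture side is standard lcov usage; a minimal sketch of one measurement cycle looks like this (stressor selection and file names are illustrative):

# Reset the kernel's gcov counters, run a stress-ng pass, then capture
# coverage and render an HTML report to browse unexercised code paths.
sudo lcov --zerocounters
stress-ng --seq 0 --timeout 15
sudo lcov --capture --output-file kernel.info
genhtml kernel.info --output-directory coverage-html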

This week I released V0.09.59 of stress-ng. Apart from the usual set of clean-up changes and bug fixes, this new release now incorporates bash command line completion to make it easier to use. Once the 5.2 Linux kernel has been released and I'm satisfied that stress-ng covers new 5.2 features, I will probably be releasing V0.10.00. This will be a major release milestone now that stress-ng has realized most of my original design goals.

LoCo Ubuntu PT: Ubucon Portugal 2019 – Recap


On 6 April, Ubucon Portugal 2019 took place at ISCTE. The event was organised by the Ubuntu Portugal Community, ISCTE – Instituto Universitário de Lisboa, ISTAR-IUL – Information Sciences and Technologies and Architecture Research Center, and the ISCTE-IUL ACM Student Chapter. The aim of the event was to promote Open Source software and its community under the umbrella of the Portuguese Ubuntu community, but it ended up becoming a much broader event than that, best summed up by the following keywords:

  • Awareness: Throughout the event, participants were repeatedly made aware of our digital rights: what to do to keep them, how to fight for them and, above all, how to understand them. Topics such as the infamous Articles 11 and 13 were analysed, along with the changes that will come when they are implemented by the European Union and adopted by each member state.
    Participants were also made aware of the kinds of software licensing that exist and how work developed in the open can be protected, as well as all the processes needed to change our legislation to take these aspects into account.
  • Community: As this was an event aimed at the community, it was possible to watch presentations of projects driven by members of our community.
    We saw the evolution of the KDE graphical environment from the very first version up to the much-acclaimed KDE Plasma and its ecosystem, with KDE no longer being just a graphical environment but a community-oriented ecosystem. It was also possible to demystify the old claim that “Linux is no good for gaming”: current figures were presented for games supported on Linux, either natively or through third-party software, and the presentation also covered the efforts being made by Steam to turn Linux into a gaming platform and to put to rest, once and for all, the idea that on Linux you cannot game, or can only play simpler games.
  • Future: It is not possible to project the future without first understanding the past, and so the Portuguese Ubuntu community presented its plans and missions for the future. Participants were also introduced to ISCTE's master's programme in the area of open source, perhaps one of the most out-of-the-box master's degrees in the current national education landscape, and one that gives open source supporters a special push to pursue a dream and graduate in this field. Finally, there was a debate on one of the greatest industrial revolutions of recent times, 3D printing: case studies and applications, how far we will get with this technology, and the impact 3D printing may have on industry and the environment, using products that can be degradable and environmentally friendly.

To wrap up, it was an event full of good atmosphere and plenty of curiosity about open source, and one that never set aside one of the most important things: the spirit of community and fellowship for this cause, and for this dream of a more open and transparent technological world. Is it worth taking part?
Of course, but not only as someone who attends the event: also as someone who organises events, supports the community and participates in the community.

Note: As the community always aims for more and more, as soon as Ubucon Portugal ended, the community immediately began preparing Ubucon Europe, so this recap comes somewhat late; nevertheless, an event of this kind could not go unmentioned.
Thank you to everyone who attended, and may we all be together again next year for Ubucon Portugal 2020.

Canonical Design Team: Design and Web team summary – 10 June 2019


This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.

Web

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Integrating the blog into www.ubuntu.com

We have been working on integrating the blog into www.ubuntu.com, building new templates and an improved blog module that will serve pages more quickly.

Takeovers and engage pages

We built and launched a few new homepage takeovers and landing pages, including:

– Small Robot Company case study

– 451 Research webinar takeover and engage page

– Whitepaper takeover and engage page for Getting started with AI

– Northstar robotics case study engage page  

– Whitepaper about Active Directory

Verifying checksum on download thank you pages

We have added the steps to verify your Ubuntu download to the website. To see these steps, download Ubuntu and check the thank-you page.
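The published steps boil down to fetching the checksum files that sit alongside the image and letting sha256sum do the comparison, roughly as follows (file names depend on the release you downloaded):

# Check the signature on the checksum list, then verify the ISO against it.
gpg --keyid-format long --verify SHA256SUMS.gpg SHA256SUMS
sha256sum -c SHA256SUMS 2>&1 | grep OK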

Mir Server homepage refresh

A new homepage hero section was designed and built for www.mir-server.io.

The goal was to update that section with an image related to digital signage/kiosk and also to give a more branded look and feel by using our Canonical brand colours and Suru folds.

Brand

The Brand squad champions a consistent look and feel across all media, from web to social to print and logos.

Guide to using the company slide deck

The team have been working on storyboarding a video to guide people to use the new company slide deck correctly and highlight best practices for creating great slides.

Illustration and UI work

We have been working hard on breaking down our illustrations into multiple levels. We have identified three levels of illustration we use, and are in the process of gathering them across all our websites and reproducing them in a consistent style.

Alongside this we have started to look at the UI icons we use in all our products with the intention of creating a single master set that will be used across all products to create a consistent user experience.

Marketing support

Created multiple documents for the Marketing team, including two whitepapers and three case studies for the Small Robot Company, Northstar and Yahoo Japan.

We also created an animated screen for the stand back wall at Kubecon in Barcelona.

MAAS

The MAAS squad develops the UI for the MAAS project.

Renamed Pod to KVM in the MAAS UI

MAAS has been using the label “pod” for any KVM (libvirt) or RSD host – a label that is not industry standard and can be confused with the use of pods in Kubernetes. In order to avoid this, we went through the MAAS app and renamed all instances of pod to KVM, and separated the interface for KVM and RSD hosts.

Replaced Karma tests with Jest

The development team working on MAAS have been focusing on modernising areas of the application. This led to moving from the Karma test framework to Jest.

Absolute import paths to modules

Another area the development team would like to tackle is migrating from AngularJS to React. To decouple us from Angular, we moved to loading modules from absolute paths.

KVM/RSD: In-table graphs for CPU, RAM and storage

MAAS CPU, RAM and Storage mini charts
MAAS usage tooltip
MAAS storage tooltip

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

Design update for JAAS.ai

We have worked on a design update for jaas.ai which includes new colours and page backgrounds. The aim is to bring the website in line with recent updates carried out by the brand team.

Top of the new JAAS homepage

Design refresh of the top of the store page

We have also updated the design of the top section of the Store page, to make it clearer and more attractive, and again including new brand assets.

Top of jaas store page

UX review of the CLI between snap and juju

Our team has also carried out some research in the first step to more closely aligning the commands used in the CLI for Juju and Snaps. This will help to make the experience of using our products more consistent.

Vanilla

The Vanilla squad designs and maintains the design system and the Vanilla framework library. They ensure a consistent style throughout web assets.

Vanilla 2.0.0 release

Since our last major release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version of Vanilla yet. These changes have been released in v2.0.0.

Over the past 2 weeks, we’ve been running QA tests across our marketing sites and web applications using our pre-release 2.0.0-alpha version. During testing, we’ve been filing and fixing bugs against this version, and have pushed up to a pre-release 2.0.0-beta version.

Vanilla framework v2.0.0 banner

We plan to launch Vanilla 2.0.0 today, once we have finalised our release notes and completed our upgrade document, which will help guide users when upgrading their sites.

Look out for our press release posts on Vanilla 2.0.0 and our Upgrade document to go along with it.

Snapcraft

The Snapcraft team work closely with the snap store team to develop and maintain the snap store website.

Install any snap on any platform

We’re pleased to announce we’ll be releasing distribution install pages for all Snaps. They’re one-stop shops for any combination of Snap and supported distro. Simply visit https://snapcraft.io/install/spotify/debian or, say, https://snapcraft.io/install/vlc/arch. The combinations are endless, and not only do the pages give you that comfy at-home feeling when it comes to branding, they’re also pretty useful. If you’ve never installed Snaps before, we provide some easy step-by-step instructions to get the snap running and suggest some other Snaps you might like.
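The instructions on each page reduce to a couple of commands; for the Debian example above they look roughly like this (snapd needs installing first on distros that don't ship it):

# Install snapd on Debian, then install the snap featured on the page.
sudo apt update
sudo apt install snapd
sudo snap install spotify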

Snap how to install VSC

The post Design and Web team summary – 10 June 2019 appeared first on Ubuntu Blog.

Podcast Ubuntu Portugal: Ep. 57 – The Good, the Bad and the Sloppy


In this episode we find out the latest destinations in Diogo Constantino's travels and where he has been spending money, but also what Tiago has been up to while missing the last few episodes. You know the drill: listen, subscribe and share!

  • https://sintra2019.ubucon.org/
  • https://videos.ubuntu-paris.org/
  • https://slimbook.es/zero-smart-thin-client-linux-windows-fanless
  • https://slimbook.es/pedidos/mandos/mando-gaming-inal%C3%A1mbrico-nox-comprar
  • https://panopticlick.eff.org/
  • https://www.mozilla.org/en-US/firefox/67.0/releasenotes/
  • https://blog.mozilla.org/addons/2019/03/26/extensions-in-firefox-67/
  • https://discourse.ubuntu.com/t/mir-1-2-0-release/11034
  • https://www.linuxondex.com/
  • https://github.com/tcarrondo/aws-mfa-terraform
  • https://devblogs.microsoft.com/commandline/announcing-wsl-2/

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – recording, production, editing, mixing and mastering); contact: thunderclawstudiosPT–at–gmail.com.

Attribution and licences

The cover image is: VisualHunt, licensed under the terms of the CC0 1.0 licence.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing it for other kinds of use; contact us for validation and authorisation.

Full Circle Magazine: Full Circle Weekly News #134


The Fridge: Ubuntu Weekly Newsletter Issue 582


Welcome to the Ubuntu Weekly Newsletter, Issue 582 for the week of June 2 – 8, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Kubuntu General News: Plasma 5.16 for Disco 19.04 available in Backports PPA


We are pleased to announce that Plasma 5.16 is now available in our backports PPA for Disco 19.04.

The release announcement detailing the new features and improvements in Plasma 5.16 can be found here

Released along with this new version of Plasma is an update to KDE Frameworks 5.58. (5.59 will soon be in testing for Eoan 19.10 and may follow in the next few weeks.)

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.16, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.15 as included in the original 19.04 Disco release.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

Canonical Design Team: Get to know these 5 Ubuntu community resources

Ubuntu community

As open-source software, Ubuntu is designed to serve a community of users and innovators worldwide, ranging from enterprise IT pros to small-business users to hobbyists.

Ubuntu users have the opportunity to share experiences and contribute to the improvement of this platform and community. To encourage our wonderful community to continue learning, sharing and shaping Ubuntu, here are five helpful resources:

Ubuntu Tutorials

These tutorials provide step-by-step guides on using Ubuntu for different projects and tasks across a wide range of Linux tools and technologies.
Many of these tutorials are contributed and suggested by users, so this site also provides guidance on creating and requesting a tutorial on a topic you believe needs to be covered.

Ubuntu Community Hub

This community site for user discourse is relatively new and intended for people working at all levels of the stack on Ubuntu. The site is evolving, but currently includes discussion forums, announcements, QA and testing requests, feedback to the Ubuntu Community Council and more.

Ubuntu Community Help Wiki page

From installation to documentation of Ubuntu flavours such as Lubuntu and Kubuntu, this wiki page offers instructions and self-help to users comfortable doing it themselves. Learn some tips, tricks and hacks, and find links to Ubuntu official documentation as well as additional help resources.

Ubuntu Server Edition FAQ page

Its ease of use, ability to customise and capacity to run on a wide range of hardware make Ubuntu the most popular server choice of the cloud age. This FAQ page provides answers on technical questions, maintenance, support and more, to address any Ubuntu Server queries.

Ubuntu Documentation

If you are a user who relies extensively on Ubuntu documentation, perhaps you can lend a hand to the Documentation Team to help improve it by:

  • Submitting a bug: Sending in a bug report when you find mistakes.
  • Fixing a bug: Proposing a fix for an existing bug.
  • Creating new material: Adding to an existing topic or writing on a new topic.

These are just a few of the available resources and recommended suggestions for getting involved in the Ubuntu community. For more, visit ubuntu.com/community.




The post Get to know these 5 Ubuntu community resources appeared first on Ubuntu Blog.


Stephen Michael Kellat: A Modest Ham-Related Proposal


Over the past couple months I have been trying to participate in the Monday morning net run by the SDF Amateur Radio Club from SDF.org. It has been pretty hard for me to catch up with any of the local amateur radio clubs. There is no local club associated with the American Radio Relay League in Ashtabula County, but it must be remembered that Ashtabula County is fairly large in terms of land area.

For reference, the state of Rhode Island and Providence Plantations has a dry land area of 1,033.81 square miles. Ashtabula County has a dry land area of 702 square miles. Ashtabula County is 68% the size of Rhode Island in terms of land area, even though it has only 9.23% of Rhode Island's population. Did I hear mooing off in the distance somewhere? For British readers, it is safe to say not only that I'm in a fairly isolated area but that it may resemble Ambridge a bit too much.

Now, the beautiful part about the SDF Amateur Radio Club net is that it takes place via the venerable EchoLink system. The package known as qtel allows for access to the repeater-linking network from your Ubuntu desktop. Unlike normal times, the Wikipedia page about EchoLink actually provides a fairly nice write-up for the non-specialist.

Now, there is a relatively old article on the American Radio Relay League's website about Ubuntu. If you look at the Ubuntu Wiki, there is talk about Ubuntu Hams having their own net but the last time that page was edited was 2012. While there is talk of an IRC channel, a quick look at irclogs.ubuntu.com shows that it does not look like the log bot has been in the channel this month. E-mail to the Launchpad Team's mailing list hosted on Launchpad itself is a bit sporadic.

I have been a bit MIA myself due to work pressures. That does not mean I am unwilling to act as the Net Control Station if there is a group willing to hold a net on EchoLink perhaps. It would be a good way to get hams from across the Ubuntu realms to have some fellowship with each other.

For now, I am going to make a modest proposal. If anybody is interested in such an Ubuntu net could you please check in on the SDF ARC net on June 17 at 0000 UTC? To hear what the most recent net sounded like, you can listen to the recorded archive of that net's audio in MP3 format. Just check in on June 17th at 0000 UTC and please stick around until after the net ends. We can talk about possibilities after the SDF net ends. All you need to do is be registered to use EchoLink and have appropriate software to connect to the appropriate conference.

I will cause notice of this blog post to be made to the Launchpad Team's mailing list.

Creative Commons License
A Modest Ham-Related Proposal by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Canonical Design Team: Customisable for the enterprise: the next-generation of drones


Drones, and their wide-ranging uses, have been a constant topic of conversation for some years now, but we’re only just beginning to move away from the hypothetical and into reality. The FAA estimates that there will be 2 million drones in the United States alone in 2019, as adoption within the likes of distribution, construction, healthcare and other industries accelerates.

Driven by this demand, Ubuntu – the most popular Linux operating system for the Internet of Things (IoT) – is now available on the Manifold 2, a high-performance embedded computer offered by leading drone manufacturer, DJI. The Manifold 2 is designed to fit seamlessly onto DJI’s drone platforms via the onboard SDK and enables developers to transform aerial platforms into truly smarter drones, performing complex computing tasks and advanced image processing, which in turn creates rapid flexibility for enterprise usage.

As part of the offering, the Manifold 2 is planning to feature snaps. Snaps are containerised software packages, designed to work perfectly across cloud, desktop, and IoT devices – with this the first instance of the technology’s availability on drones. The ability to add multiple snaps means a drone’s functionality can be altered, updated, and expanded over time. Depending on the desired use case, enterprises can ensure the form a drone is shipped in does not represent its final iteration or future worth.

Snaps also feature enhanced security and greater flexibility for developers. Drones can receive automatic updates in the field, which will become vital as enterprises begin to deploy large-scale fleets. Snaps also support roll back functionality in the event of failure, meaning developers can innovate with more confidence across this growing field.

Designed for developers, having the Manifold 2 pre-installed with Ubuntu means support for Linux, CUDA, OpenCV, and ROS. It is ideal for the research and development of professional applications, and can access flight data and perform intelligent control and data analysis. It can be easily mounted to the expansion bay of DJI’s Matrice 100, Matrice 200 Series V2 and Matrice 600, and is also compatible with the A3 and N3 flight controller.

DJI has now counted at least 230 people who have been rescued with the help of a drone since 2013. As well as being used by emergency services, drones are helping to protect lives by eradicating the dangerous elements of certain occupations. Apellix is one such example, supplying drones which run on Ubuntu to alleviate the need for humans to be at the forefront of work in elevated, hazardous environments, such as aircraft carriers and oil rigs.

Utilising the freedom brought by snaps, it is exciting to see how developers drive the drone industry forward. Software is allowing the industrial world to move from analog to digital, and mission-critical industries will continue to evolve based on its capabilities.

The post Customisable for the enterprise: the next-generation of drones appeared first on Ubuntu Blog.

Canonical Design Team: New release: Vanilla framework 2.0


Over the past year, we’ve been working hard to bring you the next release of Vanilla framework: version 2.0, our most stable release to date.

Since our last significant release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version we’ve released.

You can see the full list of new and updated changes in the framework in the full release notes.

New to the Framework

Features

The release has too many changes to list them all here but we’ve outlined a list of the high-level changes below.

The first major change was removing the Shelves grid, which has been in the framework since the beginning, and reimplementing the functionality with CSS grid. A native CSS solution has given us more flexibility with layouts. While working on the grid, we also upped the grid max-width base value from 990px to 1200px, following trends in screen sizes and resolutions.

We revisited vertical spacing with a complete overhaul of what we implemented in our previous release. Now, most element combinations correctly fit the baseline vertical grid without the need to write custom styles.

To further enforce code quality and control, we added a prettier dependency with a pre-commit hook, which led to extensive code quality updates after running it for the first time. In regards to dependencies, we’ve added renovate to the project to help keep dependencies up to date.

If you would like to see the full list of features you can look at our release notes, but below we’ve captured quick wins and big changes to Vanilla.

  • Added a script for developers to analyse individual patterns with Parker
  • Updated the max-width of typographic elements
  • Broke up the large _typography.scss file into smaller files
  • Standardised the naming of spacing variables to use intuitive (small/medium/large) naming where possible
  • Increased the allowed number of media queries in the project to 50 in the parker configuration
  • Adjusted the base font size so that it respects browser accessibility settings
  • Refactored all *.scss files to remove sass nesting when it was just being used to build class names – files are now flatter and have full class names in more places, making the searching of code more intuitive

Components and utilities

Two new components have been added to Vanilla in this release: `p-subnav` and `p-pagination`. We’ve also added a new `u-no-print` utility to exclude web-only elements from printed pages.

New components to the framework: Sub navigation and Pagination.

Removed deprecated components

As we extend the framework, we find that some of our older patterns are no longer needed or are used very infrequently. In order to keep the framework simple and to reduce the file size of the generated CSS, we try to remove unneeded components when we can. As core patterns improve, it’s often the case that overly-specific components can be built using more flexible base components.

  • p-link--strong: this was a mostly-unused link variant which added significant maintenance overhead for little gain
  • p-footer: this component wasn’t flexible enough for all needs and its layout is achievable with the much more flexible Vanilla grid
  • p-navigation--sidebar: this was not widely used and can be easily replicated with other components

Documentation updates

Content

During this cycle we improved the content structure per component: each page now has a template with a hierarchy and grouping of component styles, do’s and don’ts of usage, and accessibility rules. In doing so, we also updated the examples to showcase real use cases from across our marketing sites and web applications.

Updated Color page on our documentation site.

As well as updating content structure across all component pages, we also made other minor changes to the site listed below:

  • Added new documentation for the updated typographic spacing
  • Documented pull-quote variants
  • Merged all “code” component documentation to allow easier comparison
  • Changed the layout of the icons page

Website

In addition to framework and documentation content, we still managed to make time for some updates on vanillaframework.io. Below is a list of high-level items we completed to help users navigate when visiting our site:

  • Updated the navigation to match the rest of the website
  • Added Usabilla user feedback widget
  • Updated the “Report a bug” link
  • Updated mobile nav to use two dropdown menus grouped by “About” and “Patterns” rather than having two nav menus stacked
  • Restyled the sidebar and the background to light grey

Bug fixes

As well as bringing lots of new features and enhancements, we continue to fix bugs to keep the framework up to date. Going forward, we plan to improve our release process by pushing out more frequent patch releases, to help the team with bugs that may be blocking feature deliverables.

Getting Vanilla framework

To get your hands on the latest release, follow the getting started instructions, which include all options for using Vanilla.
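If you consume Vanilla through npm (one of the options the instructions cover), updating is a one-liner; vanilla-framework is the name the project publishes under:

# Pull the 2.0.0 release of the framework into your project.
yarn add vanilla-framework@2.0.0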

The post New release: Vanilla framework 2.0 appeared first on Ubuntu Blog.

Ubuntu Podcast from the UK LoCo: S12E10 – Salamander


This week we’ve been playing with tiling window managers, we “meet the forkers”, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Alan has been playing with i3wm.
  • We “meet the forkers”; when projects end, forks are soon to follow.

  • We share a command line lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!
  • Image taken from Salamander arcade machine manufactured in 1986 by Konami.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Julian Andres Klode: Encrypted Email Storage, or DIY ProtonMail


In the previous post about setting up an email server, I explained how I setup a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server, using GPG encryption to make sure intruders can’t read our precious data!

Architecture

The basic architecture chosen for encrypted storage is that every incoming email is delivered by postfix to dovecot via LMTP, and then dovecot runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key, before processing it further. Or short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: This means that the message will be on the system unencrypted as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix has deleted it, by investigating the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. Not sure how to improve this much.

And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signature as much as we want! Makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it into a multipart mime message with the same headers, as the second attachment; the first attachment is control information.

Technically, this means that we keep headers twice, once encrypted and once decrypted. But the advantage compared to doing it more like most normal clients is clear: The code is a lot easier, and we can reverse the encryption and get back the original!

And when I say easy, I mean easy - the function to encrypt the email is just a few lines long:

import email.encoders
import email.message
import email.mime.application
import email.mime.multipart
import typing

import gnupg


def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream',
        _encoder=email.encoders.encode_7or8bit)

    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
        _encoder=email.encoders.encode_7or8bit)
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted',
        protocol='application/pgp-encrypted')
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers
    headers_not_to_override = {key.lower() for key in encmsg.keys()}

    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()

Decrypting the email is even easier: Just pass the entire thing to GPG, it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))

(now, not sure if it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you).

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypting emails it downloads).
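That simpler variant is just the classic gpg pipe; for a single message it would look something like this (the recipient address is a placeholder):

# Armor-encrypt one raw message for a single recipient before storing it.
gpg --encrypt --armor -r you@example.org < message.eml > message.eml.asc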

Pretty Easy privacy (p≥p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≥p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata!

Basically it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format we just have to set the subject to p≥p and add a header describing the format version:

       Subject: =?utf-8?Q?p=E2=89=A1p?=
       X-Pep-Version: 2.0

A client conforming to p≥p will, when seeing this email, read any headers from the inner (encrypted) message.

We also might want to change the code to only copy a limited amount of headers, instead of basically every header, but I’m going to leave that as an exercise for the reader.

Putting it together

Assume we have a Postfix and a Dovecot configured, and a script gpgmymail written using the function above, like this:

import argparse
import email
import sys


def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()
    msg = email.message_from_file(sys.stdin)

    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))


if __name__ == '__main__':
    main()

(see the end of the blog post for links to the full source code)

Then, all we have to is edit our .dovecot.sieve to add

filter "gpgmymail""myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP, but instead configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).

Encrypt or not Encrypt?

Now do you actually want to encrypt? The disadvantages are clear:

  • Server-side search becomes useless, especially if you use p≥p with encrypted Subject.

    Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!

  • You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot in my git repository.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2019


Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
  • Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
  • Ben Hutchings did 18 hours (out of 18 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
  • Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated + 15.25 extra hours from April, thus carrying over 0.25h to June).
  • Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
  • Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
  • Markus Koschany did 18 hours (out of 18 hours allocated).
  • Mike Gabriel did 23.75 hours (out of 18 hours allocated + 5.75 extra hours from April).
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 4 extra hours from April, thus carrying over 6h to June).
  • Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated + 10.25 extra hours from April).
  • Sylvain Beucler did 18 hours (out of 18 hours allocated).
  • Thorsten Alteholz did 18 hours (out of 18 hours allocated).

Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 hours funded per month. We continue to be looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Jonathan Riddell: KDE.org Description Update


The KDE Applications website was the minimal possible change to move it from an unmaintained and incomplete site to a self-maintaining and complete site. It’s been fun to see it get picked up in places like Ubuntu Weekly News and Late Night Linux, and, chatting to people in real life, to hear they have seen it get an update. So clearly it’s important to keep our websites maintained. Alas, the social and technical barriers are too high in KDE. My current hope is that the Promo team will take over the kde-www stuff, giving it communication channels and transparency that don’t currently exist. There is plenty more work to be done on the kde.org/applications website to make it useful, so do give me a ping if you want to help out.

In the mean time I’ve updated the kde.org front page text box where there is a brief description of KDE.  I remember a keynote from Aaron around 2010 at Akademy where he slagged off the description that was used on kde.org.  Since then we have had Visions and Missions and Goals and whatnot defined but nobody has thought to put them on the website.  So here’s the new way of presenting KDE to the world:

Thanks to Carl and others for review.

 


Ted Gould: Development in LXD


Most of my development is done in LXD containers. I love this for a few reasons. It takes all of my development dependencies and makes it so that they're not installed on my host system, reducing the attack surface there. It means that I can do development on any Linux that I want (or several). But it also means that I can migrate my development environment from my laptop to my desktop depending on whether I need more CPU or whether I want it to be closer to where I'm working (usually when travelling).

When I'm traveling I use my Pagekite SSH setup on a Raspberry Pi as the SSH gateway. So when I'm at home I want to connect to the desktop directly, but when away connect through the gateway. To handle this I set up SSH to connect into the container no matter where it is. For each container I have an entry in my .ssh/config like this:

Host container-name
    User user
    IdentityFile ~/.ssh/id_container-name
    CheckHostIP no
    ProxyCommand ~/.ssh/if-home.sh desktop-local desktop.pagekite.me %h

You'll notice that I use a different SSH key for each container. They're easy to generate, and not reusing them is good practice. Then for the ProxyCommand I have a shell script that'll set up a connection depending on where the container is running and what network my laptop is on.

#!/bin/bash

set -e

CONTAINER_NAME=$3
SSH_HOME_HOST=$1
SSH_OUT_HOST=$2

# Find the default router's IP and MAC to detect which network we're on
ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )
ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )
HOME_ROUTER_MAC="▒▒:▒▒:▒▒:▒▒:▒▒:▒▒"

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"
NC_COMMAND="nc -6 -q0"

# If the container is running locally, connect to its SSH port directly
IP=$( bash -c "${IP_COMMAND}" )
if [ "${IP}" != "" ]; then
    # Local
    exec ${NC_COMMAND} ${IP} 22
fi

# Otherwise pick the gateway host based on which router we're behind
SSH_HOST=${SSH_OUT_HOST}
if [ "${HOME_ROUTER_MAC}" == "${ROUTER_MAC}" ]; then
    SSH_HOST=${SSH_HOME_HOST}
fi

# Look up the container's IP on the remote host, then proxy to its SSH port
IP=$( echo ${IP_COMMAND} | ssh ${SSH_HOST} bash -l -s )

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\""

What this script does is first try to see if the container is running locally by looking up its IP:

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"

If it can find that IP, then it just sets up an nc command to connect to the SSH port on that IP directly. If not, we need to see if we're on my home network or out and about. To do that I check to see if the MAC address of the default router matches the one on my home network. This is a good way to check because it doesn't require sending additional packets onto the network or otherwise connecting to other services. To get the router's IP we look at which router is used to get to an address on the Internet:

ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )

We can then find out the MAC address for that router using the ARP table:

ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

If that MAC address matches a predefined value (redacted in this post) I know that it's my home router, else I'm on the Internet somewhere. Depending on which case I know whether I need to go through the proxy or whether I can connect directly. Once we can connect to the desktop machine, we can then look for the IP address of the container off of there using the same IP command running on the desktop. Lastly, we setup an nc to connect to the SSH daemon using the desktop as a proxy.

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\""

What all this means is that I just type ssh container-name anywhere and it just works. I can move my containers wherever, my laptop wherever, and connect to my development containers as needed.

Jono Bacon: Conversations With Bacon: Kate Drane, Techstars


Kate Drane is a bit of an enigma. She helped launch hundreds of crowdfunding projects at Indiegogo (in fact, I worked with her on the Ubuntu Edge and Global Learning XPRIZE campaigns). She has helped connect hundreds of startups to expertise, capital, and customers at Techstars, and is a beer fan who co-founded a canning business called The Can Van.

There is one clear thread through her career: providing more efficient and better access for innovators, no matter what background they come from or what they want to create. Oh, and drinking great beer. She is fantastic and does great work.

In this episode of Conversations With Bacon we unpack her experiences of getting started in this work, her work facilitating broader access to information, funding, and people, what it was like to be at Indiegogo through the teenage years of crowdfunding, how she works to support startups, the experience of entrepreneurship from different backgrounds, and more.

The post Conversations With Bacon: Kate Drane, Techstars appeared first on Jono Bacon.

Bryan Quigley: Hack Computer review


I bought a hack computer for $299 - it's designed for teaching 8+ year olds programming. That's not my intended use case, but I wanted to support a Linux pre-installed vendor with my purchase (I bought an OLPC back in the day in the buy-one give-one program).

I only use a laptop for company events, which are usually 2-4 weeks a year. Otherwise, I use my desktop. I would have bought a machine with Ubuntu pre-installed if I was looking for more of a daily driver.

The underlying specs of the ASUS Laptop E406MA they sell are:

Unboxing and first boot

Unboxing

Included were:

  • an introduction letter to parents
  • tips (more for kids)
  • 2 pages of hack stickers
  • 2 hack pins
  • ASUS manual bits
  • A USB to Ethernet adapter
  • and the laptop:

Laptop in sleeve
Laptop out of sleeve
First open

First boot takes about 20 seconds, and you are then dropped into what I'm pretty sure is GNOME Initial Setup. It also asks whether Wifi connections are metered or not.

first open

There are standard Phillips-head screws on the bottom of the laptop, but it wasn't easy to remove the bottom and I didn't want to push - I've been told there is nothing user-replaceable within.

The BIOS

The options I'd like to change are there, and updating the BIOS was easy enough from the BIOS itself (although there's no LVFS support).

BIOS EZ mode
BIOS advanced

A kids take

Keep in mind this review was done by a 6 year old, while the laptop is designed for an 8+ year old.

He liked playing the art game and the ball game. The ball game is an intro to the Hack content. The art game is just Krita - see the artwork below. The first load needed some help, but he got the hang of the symmetrical tool.

He was able to install an informational program about Football by himself, though he was hoping it was a game to play.

AAAAA
My favorite
Water color

Overall

For the target market: It's really the perfect first laptop (if you want to buy new) with what I would generally consider the right trade-offs. Given Endless OS's ability to have great content pre-installed, I might have tried to go for a 128 GB drive. Endless OS is set up to use zram, which will minimize RAM issues as much as possible. The core paths are designed for kids, but some applications are definitely not. It will be automatically updating and improving over time. I can't evaluate the actual Hack content; the first year is free, but after that it will be $10 a month.

For people who want a cheap Linux pre-installed laptop: I don't think you can do better than this for $299.

Pros:

  • CPU really seems to be the best in this price range. A real Intel quad-core, but cheap enough to have missed some of the vulnerabilities that have plagued Intel (no HT).
  • Battery life is great
  • A 1080p screen

Cons:

  • RAM and disk sizes. Slow eMMC disk. Not upgradeable.
  • Fingerprint reader doesn't work today (and that's not part of their goal with the machine, it defaults to no password)
  • For free software purists, Trisquel didn't have working wireless or trackpad. The included usb->ethernet worked though.
  • Mouse can lack sensitivity at times
  • Ubuntu: I have had Wifi issues after suspend, but stopping and starting Wifi fixed them
  • Ubuntu: Boot times are slower than Endless
  • Ubuntu: Suspend sometimes loses the ability to play sound (gets stuck on headphones)

I do plan on investigating the issues above and seeing if I can fix any of them.

Using Ubuntu?

My recommendations:

  • Purge rsyslog (may speed up boot time and reduces unnecessary writes); see the commands after this list
  • For this class of machine, I'd go deb-only (remove snaps) and update manually
  • Install zram-config
  • I'm currently running with Wayland and Chromium
  • If you don't want to use stock Ubuntu, I'd recommend Lubuntu.
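In apt terms, the first few items come down to something like this (rsyslog and zram-config are the stock Ubuntu package names):

# Drop rsyslog (journald still keeps logs) and enable compressed swap.
sudo apt purge rsyslog
sudo apt install zram-config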

Dive deeper

Stephen Michael Kellat: So That Happened...


I previously made a call for folks to check in on a net so I could count heads. It probably was not the most opportune timing but it was what I had available. You can listen to the full net at https://archives.anonradio.net/201906170000_sdfarc.mp3 and you'll find my after-net call to all Ubuntu Hams at roughly 44 minutes and 50 seconds into the recording.

This was a first attempt. The folks at SDF were perfectly fine with me making the attempt. The net topic for the night was "special projects" we happened to be undertaking.

Now you might wonder what I might be doing in terms of special projects. That bit is special. Sunspots are a bit non-existent at the moment so I have been fiddling around with listening for distant stations on the AM broadcast band which starts in the United States at 530 kHz and ends at 1710 kHz. From my spots in Ashtabula I end up hearing some fairly distant stations ranging from KYW 1060 in Philadelphia to WCBS 880 in New York City to WPRR 1680 in Ada, Michigan. When I am out driving Interstate Route 90 in the mornings during the winter I have had the opportunity to hear stations such as WSM 650 broadcasting from the vicinity of the Grand Old Opry in Nashville, Tennessee. One time I got lucky and heard WSB 750 out of Atlanta while driving when conditions were right.

These were miraculous feats of physics. WolframAlpha would tell you that the distance between Ashtabula and Atlanta is about 593 miles/955 kilometers. In the computing realm we work very hard to replicate the deceptively simple. A double-sideband non-suppressed carrier amplitude modulated radio signal is one of the simplest voice transmissions that can be made. The receiving equipment for such is often just as simple. For all the infrastructure it would take to route a live stream over a distance somewhat further than that between Derry and London proper, far less would be needed for the one-way analog signal.

Although there is Digital Audio Broadcasting across Europe, we really still do not have it adopted across much of the United States. A primary problem is that it works best in areas with higher population density than we have in the USA. So far we have various trade names for IBOC (in-band on-channel) subcarriers giving us hybrid signals. Digital-only IBOC has been tested at WWFD in Maryland, and there was a proposal to the Federal Communications Commission to make a permanent rules change to make this possible. It appears in the American experience, though, that the push is more towards Internet-connected products like iHeartRadio and Spotify rather than the legacy media outlets that have public service obligations as well as emergency alerting obligations.

I am someone who considers the Internet fairly fragile as evidenced most recently by the retailer Target having a business disaster through being unable to accept payments due to communications failures. I am not against technology advances, though. Keeping connections to the technological ways of old as well as sometimes having cash in the wallet as well as knowing how to write a check seem to be skills that are still useful in our world today.

Creative Commons License
So That Happened... by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Simos Xenitellis: How to run LXD containers in WSL2


Microsoft announced in May that the new version of Windows Subsystem for Linux 2 (WSL 2) will be running on the Linux kernel, itself running alongside the Windows kernel in Windows.

In June, the first version of WSL2 was made available, as long as you update your Windows 10 installation to the Windows Insider program and select to receive the bleeding edge updates (fast ring).

In this post we are going to see how to get LXD running in WSL2. In a nutshell, LXD does not work out of the box yet, but LXD is versatile enough to actually make it work even when the default Linux kernel in Windows is not fully suitable yet.

Prerequisites

You need to have Windows 10, then join the Windows Insider program (Fast ring).

Then, follow the instructions on installing the components for WSL2 and switching your containers to WSL2 (if you have been using WSL1 already).

Install the Ubuntu container image from the Windows Store.

At the end, when you run wsl in CMD.exe or in Powershell, you should get a Bash prompt.

The problems

We are listing here the issues that do not let LXD run out of the box. Skip to the next section to get LXD going.

In WSL2, there is a modified Linux 4.19 kernel running in Windows, inside Hyper-V. It looks like this is a cut-down/optimized version of Hyper-V that is good enough for the needs of Linux.

The Linux kernel in WSL2 has a specific configuration, and some of the things that LXD needs are missing. Specifically, here is the output of lxc-checkconfig.

ubuntu@DESKTOP-WSL2:~$ lxc-checkconfig
 --- Namespaces ---
 Namespaces: enabled
 Utsname namespace: enabled
 Ipc namespace: enabled
 Pid namespace: enabled
 User namespace: enabled
 Network namespace: enabled

--- Control groups ---
 Cgroups: enabled

--- Control groups ---
 Cgroups: enabled

Cgroup v1 mount points:
 /sys/fs/cgroup/cpuset
 /sys/fs/cgroup/cpu
 /sys/fs/cgroup/cpuacct
 /sys/fs/cgroup/blkio
 /sys/fs/cgroup/memory
 /sys/fs/cgroup/devices
 /sys/fs/cgroup/freezer
 /sys/fs/cgroup/net_cls
 /sys/fs/cgroup/perf_event
 /sys/fs/cgroup/hugetlb
 /sys/fs/cgroup/pids
 /sys/fs/cgroup/rdma

Cgroup v2 mount points:

 Cgroup v1 systemd controller: missing
 Cgroup v1 clone_children flag: enabled
 Cgroup device: enabled
 Cgroup sched: enabled
 Cgroup cpu account: enabled
 Cgroup memory controller: enabled
 Cgroup cpuset: enabled

--- Misc ---
 Veth pair device: enabled, not loaded
 Macvlan: enabled, not loaded
 Vlan: missing
 Bridges: enabled, not loaded
 Advanced netfilter: enabled, not loaded
 CONFIG_NF_NAT_IPV4: enabled, not loaded
 CONFIG_NF_NAT_IPV6: enabled, not loaded
 CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
 CONFIG_IP6_NF_TARGET_MASQUERADE: missing
 CONFIG_NETFILTER_XT_TARGET_CHECKSUM: missing
 CONFIG_NETFILTER_XT_MATCH_COMMENT: missing
 FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
 checkpoint restore: enabled
 CONFIG_FHANDLE: enabled
 CONFIG_EVENTFD: enabled
 CONFIG_EPOLL: enabled
 CONFIG_UNIX_DIAG: enabled
 CONFIG_INET_DIAG: enabled
 CONFIG_PACKET_DIAG: enabled
 CONFIG_NETLINK_DIAG: enabled
 File capabilities:

Note : Before booting a new kernel, you can check its configuration
 usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

ubuntu@DESKTOP-WSL2:~$           

The systemd-related mount point is OK in the sense that currently systemd does not work anyway in WSL (either WSL1 or WSL2). At some point it will get fixed in WSL2, and there are pending issues on this at GitHub. Talking about systemd, we cannot yet use the snap package of LXD, because snapd depends on systemd. And no snapd means no snap package of LXD.

The missing netfilter kernel modules mean that we cannot use the managed LXD network interfaces (the one with default name lxdbr0). If you try to create a managed network interface, you will get the following error.

Error: Failed to create network 'lxdbr0': Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables: No chain/target/match by that name.
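For reference, you can reproduce this failure at any time by asking LXD to create a managed network yourself; the command below is the manual equivalent of what lxd init attempts, and it fails with the same iptables error:

ubuntu@DESKTOP-WSL2:~$ lxc network create lxdbr0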

For completeness, here is the LXD log. Notably, AppArmor support is missing from the Linux kernel and there is no CGroup network class controller.

ubuntu@DESKTOP-WSL2:~$ cat /var/log/lxd/lxd.log
 t=2019-06-17T10:17:10+0100 lvl=info msg="LXD 3.0.3 is starting in normal mode" path=/var/lib/lxd
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Configured LXD uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="AppArmor support has been disabled because of lack of kernel support"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="Couldn't find the CGroup network class controller, network limits will be ignored."
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel features:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - netnsid-based network retrieval: no"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - unprivileged file capabilities: yes"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Initializing local database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Starting /dev/lxd handler:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
 t=2019-06-17T10:17:14+0100 lvl=info msg="REST API daemon:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing global database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing storage pools"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing networks"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Loading daemon configuration"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating instance types"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating instance types"
 ubuntu@DESKTOP-WSL2:~$                     

Having said all that, let’s get LXD working.

Configuring LXD on WSL2

Let’s get a shell into WSL2.

C:\> wsl
ubuntu@DESKTOP-WSL2:~$

The apt package of LXD is already available in the Ubuntu 18.04.2 image found in the Windows Store. However, the LXD service is not running by default and we need to start it.

ubuntu@DESKTOP-WSL2:~$ sudo service lxd start
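To check that the daemon is actually up before configuring it, something like the following should work (lxd waitready blocks until LXD is ready to serve requests; the 30-second timeout is an arbitrary choice):

ubuntu@DESKTOP-WSL2:~$ sudo lxd waitready --timeout 30
ubuntu@DESKTOP-WSL2:~$ lxc info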

Now we can run sudo lxd init to configure LXD. We accept the defaults (btrfs storage driver, 50GB default storage). For networking, however, we avoid creating the local network bridge and instead configure LXD to use an existing host interface. Selecting an existing interface sets up macvlan, which avoids the error above, even though macvlan does not actually work yet in WSL2.

ubuntu@DESKTOP-WSL2:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=50GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: eth0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
 config: {}
 networks: []
 storage_pools:
 - config:
     size: 50GB
   description: ""
   name: default
   driver: btrfs
 profiles:
 - config: {}
   description: ""
   devices:
     eth0:
       name: eth0
       nictype: macvlan
       parent: eth0
       type: nic
     root:
       path: /
       pool: default
       type: disk
   name: default
 cluster: null 

ubuntu@DESKTOP-WSL2:~$

For some reason, LXD does not manage to mount sysfs for the containers, therefore we need to perform this step ourselves.

ubuntu@DESKTOP-WSL2:~$ sudo mkdir /usr/lib/x86_64-linux-gnu/lxc/sys
ubuntu@DESKTOP-WSL2:~$ sudo mount sysfs -t sysfs /usr/lib/x86_64-linux-gnu/lxc/sys
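Note that this mount does not survive a restart of the WSL2 session. WSL is documented to process /etc/fstab at startup, so an entry like the following ought to make it persistent; treat this as an untested sketch:

ubuntu@DESKTOP-WSL2:~$ echo 'sysfs /usr/lib/x86_64-linux-gnu/lxc/sys sysfs defaults 0 0' | sudo tee -a /etc/fstab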

The containers will not have direct Internet connectivity, therefore we need to use a Web proxy. In our case, it suffices to use privoxy, so let's install it. By default, privoxy listens on port 8118, which means that if the containers can somehow get access to port 8118 on the host, they get access to the Internet!

ubuntu@DESKTOP-WSL2:~$ sudo apt update
...
ubuntu@DESKTOP-WSL2:~$ sudo apt install -y privoxy
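The stock configuration is all we need here. You can confirm that privoxy listens on the loopback address by grepping its configuration file; with the Ubuntu package, this should print something like the following:

ubuntu@DESKTOP-WSL2:~$ grep '^listen-address' /etc/privoxy/config
listen-address  127.0.0.1:8118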

Now we are good to go! In the following, we create a container with a Web server and view it using Internet Explorer. Yes, IE has two uses: first, to download Firefox, and second, to view the Web server in the LXD container as evidence that all of this is real.

Setting up a Web server in an LXD container in WSL2

Let’s create our first container, running Ubuntu 18.04.2. It does not get an IP address from the network because macvlan is not working. The container has no Internet connectivity!

ubuntu@DESKTOP-WSL2:~$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

ubuntu@DESKTOP-WSL2:~$ lxc list
+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+

ubuntu@DESKTOP-WSL2:~$

The container has no Internet connectivity, so we need to give it access to port 8118 on the host. But how can we do that if the container does not even have network connectivity with the host? We can use an LXD proxy device. Run the following on the host. The command creates a proxy device called myproxy8118 that proxies TCP port 8118 between the host and the container (the binding happens in the container because the port already exists on the host).

ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy8118 proxy listen=tcp:127.0.0.1:8118 connect=tcp:127.0.0.1:8118 bind=container
Device myproxy8118 added to mycontainer

ubuntu@DESKTOP-WSL2:~$
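You can verify that the device was added correctly with lxc config device show; the output should look roughly like this:

ubuntu@DESKTOP-WSL2:~$ lxc config device show mycontainer
myproxy8118:
  bind: container
  connect: tcp:127.0.0.1:8118
  listen: tcp:127.0.0.1:8118
  type: proxy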

Now, get a shell in the container and configure the proxy!

ubuntu@DESKTOP-WSL2:~$ lxc exec mycontainer bash
root@mycontainer:~# export http_proxy=http://localhost:8118/
root@mycontainer:~# export https_proxy=http://localhost:8118/
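These exports only affect the current shell. If you want apt in the container to pick up the proxy permanently, a drop-in configuration file like the following should work (the file name 95proxy is an arbitrary choice):

root@mycontainer:~# printf 'Acquire::http::Proxy "http://localhost:8118/";\nAcquire::https::Proxy "http://localhost:8118/";\n' > /etc/apt/apt.conf.d/95proxy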

It’s time to install and start nginx!

root@mycontainer:~# apt update
...
root@mycontainer:~# apt install -y nginx
...
root@mycontainer:~# service nginx start

nginx is installed. For a finer touch, let's edit the default HTML file of the Web server a bit, so that it is evident that the Web server runs in the container. Add whatever text you think suitable, using the command

root@mycontainer:~# nano /var/www/html/index.nginx-debian.html
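If you prefer a non-interactive edit, a sed one-liner like the following does the job, assuming the stock page still contains the string "Welcome to nginx!" (pick any replacement text you like):

root@mycontainer:~# sed -i 's/Welcome to nginx!/Welcome from an LXD container in WSL2!/' /var/www/html/index.nginx-debian.html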

Up to now, we have a Web server running in the container. This container is not accessible from the host, and obviously not from Windows either. So, how can we view the website from Windows? By creating an additional proxy device. The following command creates a proxy device called myproxy80 that proxies TCP port 80 between the host and the container (the binding happens on the host because the port already exists in the container).

root@mycontainer:~# logout
ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

Finally, find the IP address of your WSL2 Ubuntu host (hint: use ifconfig) and connect to that IP using your Web browser.
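If ifconfig is not available (it lives in the net-tools package), the ip tool does the same job; look for the inet line in the output:

ubuntu@DESKTOP-WSL2:~$ ip -4 addr show eth0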

Conclusion

We managed to install LXD in WSL2 and got a container to start. Then, we installed a Web server in the container and viewed the page from Windows.

I hope future versions of WSL2 will be more friendly to LXD. On the networking side, more work is needed before LXD runs out of the box. On the storage side, things are already fine: btrfs is supported (over a loop file).
