Channel: Planet Ubuntu

Alessio Treglia: Cosmos Hub and Reproducible Builds


Open source software allows us to build trust in a distributed, collaborative software development process, and to know that the software behaves as expected and is reasonably secure. But the benefits of open source are strongest for those who directly interact with the source code. These people can use a computer they trust to compile the source code into an operational version for themselves. Distributing binaries of open source software breaks this trust model, and reproducible builds restore it.

Tendermint Inc is taking the first steps towards a trustworthy binary distribution process. Our investment in reproducible builds makes doing binary distributions of the gaia software a possibility. We envision that the Cosmos Hub community will be our partners in building trust in this process. The governance features of the Cosmos Hub will enable a novel collaboration between Tendermint and that validator community to release only binaries that can be trusted by anyone.

Here is our game plan.

The cosmoshub-3 release will support our new reproducible build process. Tendermint developers will make a governance proposal with the hashes of all supported binaries. We will ask ATOM holders to reproduce the builds on computers they control and vote YES if the hashes match.
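As a sketch (with hypothetical file names), each verifier's check boils down to hashing the locally built binaries and comparing them against the hashes in the proposal:

```shell
# Sketch of the verification step, with hypothetical file names: the real
# hash list would come from the governance proposal, and the real binaries
# from a local reproducible build.
echo "demo binary" > gaiad              # stand-in for a locally built binary
sha256sum gaiad > proposal-hashes.txt   # stand-in for the published hash list
sha256sum -c proposal-hashes.txt        # a match prints "gaiad: OK"
```

If every line reports OK, the local build reproduces the proposed binaries and a YES vote is justified.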

If the proposal passes, we will make the binaries available here via Github.

The benefits of reproducible builds

Reproducible gaia binaries bring significant advantages to developers and end users:

  • Build sanity — the guarantee that the gaia suite can always be built from sources.
  • Independent verification — third parties can independently verify executables to ensure that no vulnerabilities were introduced at build time.
  • Consensus — a large body of independent builders can eventually come to consensus on the correct reproducible binary output and protect themselves from targeted attacks.

How to verify that gaia binaries correspond to a repository snapshot

The gaia repository comes with the required tooling to build both server and client applications deterministically. First you need to clone https://github.com/cosmos/gaia and checkout the release branch or the commit you want to produce the binaries from. For instance, if you intend to build and sign reproducible binaries for all supported platforms of gaia’s master branch, you may want to do the following:

git clone https://github.com/cosmos/gaia && cd gaia
chmod +x contrib/gitian-build.sh
./contrib/gitian-build.sh -s email@example.com all

Append the -c flag to the above command if you want to upload your signature to the http://github.com/gaia/gaia.sigs repository as well.

If you want to build the binaries only without signing the build result, just type:

./contrib/gitian-build.sh all

Further information can be found here: github.com/cosmos/gaia/…/docs/reproducible-builds.md


Credits

Co-authored with Zaki Manian


The Fridge: Ubuntu Weekly Newsletter Issue 585


Welcome to the Ubuntu Weekly Newsletter, Issue 585 for the week of June 23 – 29, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Iain Lane: Canonical’s Desktop Team is hiring


Join the desktop team

Some good news for anyone who might read this: the Canonical desktop team is hiring a new Software Engineer.

More details in the job description, but if you’re looking for an opportunity that lets you:

  • work remotely
  • work on GNOME and related desktop technologies (both upstream and downstream!)
  • help to ship a solid Ubuntu every 6 months
  • work with smart people
  • have the opportunity to travel to, and present at, conferences and internal events

then please apply. You do not need to already be a GNOME or an Ubuntu/Debian expert to apply for this position – you’ll be given a mentor and plenty of time and support to learn the ropes.

Please feel free to contact me on IRC (Laney on all the best networks) / email (iain.lane@canonical.com) / Telegram (@lan3y) if you’re considering an application and you’d like to chat about it.

Full Circle Magazine: Full Circle Weekly News #137


Raspberry Pi 4 On Sale Now from $35
https://www.raspberrypi.org/blog/raspberry-pi-4-on-sale-now-from-35/

LibreELEC 9.2 ALPHA1 Ships with Raspberry Pi 4B Support
https://libreelec.tv/2019/06/libreelec-9-2-alpha1-rpi4b/

Ubuntu Decides to Keep Supporting Selected 32-bit Libs After Developer Outrage
https://itsfoss.com/ubuntu-19-10-drops-32-bit-support/

openSUSE Leap 42.3 Linux OS Reached End of Life
https://news.softpedia.com/news/opensuse-leap-42-3-linux-os-reaches-end-of-life-upgrade-to-opensuse-leap-15-now-526565.shtml

KDE Plasma 5.16.2 Desktop Environment Released with More Than 30 Bug Fixes
https://news.softpedia.com/news/kde-plasma-5-16-2-desktop-environment-released-with-more-than-30-bug-fixes-526523.shtml

Thousands of IoT Devices Bricked by Silex Malware
https://threatpost.com/thousands-of-iot-devices-bricked-by-silex-malware/146065/

Credits:
Ubuntu “Complete” sound: Canonical
 
Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

Simos Xenitellis: Reconnecting your LXD installation to the ZFS storage pool


You are using LXD and creating many containers. Those containers are stored in a dedicated ZFS pool, which LXD manages exclusively. But disaster strikes: LXD loses its database and forgets about your containers. Your data is still there in the ZFS pool, but LXD has forgotten about it because its configuration (database) has been lost.

In this post we see how to recover our containers when the LXD database is, for some reason, gone.

This post expands on the LXD disaster recovery documentation.

How to lose your LXD configuration database

How could you have lost your LXD database?

You have a working installation of LXD and you uninstalled it by accident. Normally, there should be some copy of the database lying around that would make recovery much easier. In my case, I had been running an instance of LXD from the edge channel (snap package) and, after some time, LXD got stuck and stopped working: it would not start, and the lxc commands would hang without giving any output. Therefore, I switched to the stable channel (default), and the configuration database was gone. lxc list would work, but show an empty list.

Prerequisites

In this post we cover the case where your storage pool is intact but LXD has forgotten all about your containers, your profiles, your network interfaces, and, of course, your storage pool.

Running zfs list should show output like this:

$ sudo zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
lxd                          78,4G   206G    24K  /var/snap/lxd/common/lxd/storage-pools/lxd
lxd/containers               73,1G   206G    24K  /var/snap/lxd/common/lxd/storage-pools/lxd/containers
lxd/containers/mycontainer    486M   206G   816M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer
...

But the lxc commands return empty.

$ lxc storage list
 +------+-------------+--------+--------+---------+
 | NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
 +------+-------------+--------+--------+---------+
 +------+-------------+--------+--------+---------+
$ lxc profile list
 +-----------+---------+
 |   NAME    | USED BY |
 +-----------+---------+
 +-----------+---------+

What happened?

First, LXD lost the connection to the storage pool: it no longer has any information about where the ZFS pool is. We need to give that information back to LXD.

Second, while LXD lost all its configuration, each container keeps a backup of its own configuration in a file called backup.yaml, stored in the storage pool. Therefore, you can run sudo lxd import (note: it is lxd import, not lxc import) to add each container back. If a custom profile or network interface is missing, you will get an appropriate message to act on.

How do we recover?

First, we make a list of the container names. You can most likely get the list from /var/snap/lxd/common/lxd/storage-pools/lxd/containers/.

$ ls /var/snap/lxd/common/lxd/storage-pools/lxd/containers/
mycontainer
...

Second, mount each container. We run zfs mount and specify only the ZFS dataset; ZFS already knows the mount point.

$ sudo zfs mount lxd/containers/mycontainer
$ zfs mount
lxd/containers/mycontainer   /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer

Finally, run lxd import to import the container. You may get an error; see the troubleshooting section below, and then try to import again.

$ sudo lxd import mycontainer

We can now start the container.

$ lxc start mycontainer
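Putting the steps together, the recovery can be scripted for all containers at once. This is only a sketch: the demo-containers directory stands in for the real containers path, and the recovery commands are printed rather than executed.

```shell
# Sketch: loop over every container directory and emit the mount/import
# commands. demo-containers is a stand-in for
# /var/snap/lxd/common/lxd/storage-pools/lxd/containers/.
CONTAINERS=demo-containers
mkdir -p "$CONTAINERS/mycontainer"
for dir in "$CONTAINERS"/*/; do
  name=$(basename "$dir")
  echo "sudo zfs mount lxd/containers/$name"
  echo "sudo lxd import $name"
done
```

Review the printed commands and run them one container at a time, checking for the errors described below after each import.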

Troubleshooting

Error: Create container: Requested profile ‘gui’ doesn’t exist

You get this error if the profile with name gui does not exist. Create the profile and run lxd import again.

Error: Create container: Invalid devices: Not an IP address: localhost

This relates to a change in LXD (appearing in LXD 3.13) and proxy devices. See more at this post.

Error: Storage volume for container “mycontainer” already exists in the database. Set “force” to overwrite

There is already a container in LXD with the same name. Most likely you got this error because you already imported the container. If not, you need to figure out which of the two to keep.

Canonical Design Team: Faster snap development – additional tips and tricks


Over the last few months, we published several blog posts, aimed at helping developers enjoy a smoother, faster, more streamlined experience creating snaps. We discussed the tools and tricks you can employ in snapcraft to accelerate the speed at which you iterate on your builds.

We want to continue the work presented in the Make your snap development faster tutorial, by giving you some fresh pointers and practical tips that will make the journey even brisker and snappier than before.

You shall not … multipass

Multipass is a cross-platform tool used to launch and manage virtual machines on Windows, Mac and Linux. Behind the scenes, snapcraft uses multipass to set up a clean, isolated build environment inside which your snaps are created. Multipass leverages KVM (qemu) to create virtual machine instances. While this is handy when running natively on a host, it is not reliable for nested virtual machines or systems with limited KVM support.

Indeed, if you are running snapcraft inside a VM that does not support hardware acceleration passthrough, if your host CPU does not support hardware acceleration, if hardware acceleration is disabled in the BIOS, or if the KVM modules are not loaded into memory, you will most likely see the following error:

failed to launch: Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory

If you encounter this problem – which can happen if you’re using Linux as a virtual machine, a setup that is quite popular with a large number of developers – you cannot use multipass for your builds. However, snapcraft supports several other clever methods that will let you successfully create your snaps in a safe, isolated manner.

Use LXD

You can run snapcraft with the LXD backend. This requires that you have the LXD software installed and configured on your system. Start by installing and configuring the tool:

snap install lxd --classic
lxd init

To verify that LXD has been set up correctly, you can start an Ubuntu 18.04 container instance and open a shell inside it:

lxc launch ubuntu:18.04 test-instance
lxc exec test-instance -- /bin/bash

You will now have a minimal Ubuntu installation. You can exit and destroy the container, or continue working inside it. For instance, you can now use it to set up snapcraft, install any additional software you may need, and copy in any assets, like your project files and source code.

Outside the container environment, if you want to invoke snapcraft with LXD, you can run snapcraft with the --use-lxd flag:

snapcraft --use-lxd

Destructive mode & manual container setup

By default, snapcraft uses multipass to start virtual machine instances and run the build inside them. This mode will not work when snapcraft is invoked inside a container environment. To that end, snapcraft needs to be invoked with the --destructive-mode argument. Please note that this feature is intended for short-lived CI systems, as the build will install packages on the (virtual) host, may include existing files from the host, and could thus be unclean.

snapcraft --destructive-mode

In this case, the full sequence of a manual container setup would include the following steps:

  • Manually start a container (lxc launch).
  • Copy your snapcraft.yaml into the container (lxc file push).
  • Open an interactive shell (lxc exec).
  • Inside the container, run snapcraft with the --destructive-mode flag. Please be extra careful and make sure that you run this command in the right shell, so you don’t accidentally do this on your host system. You may end up retrieving various packages and libraries that could potentially conflict with your setup.
  • Once you have successfully completed the build, you can retrieve the snap from inside the container (lxc file pull).
  • Stop and/or destroy the container instance (lxc stop).
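Put together, the sequence looks roughly like the following. All instance, path and snap names here are hypothetical; the commands are written to a file and printed rather than executed, since a real run needs a configured LXD:

```shell
# Hypothetical end-to-end sequence (instance and file names are examples only).
cat > manual-build-steps.txt <<'EOF'
lxc launch ubuntu:18.04 snap-builder
lxc file push snapcraft.yaml snap-builder/root/project/snapcraft.yaml
lxc exec snap-builder -- sh -c 'cd /root/project && snapcraft --destructive-mode'
lxc file pull snap-builder/root/project/my-snap_1.0_amd64.snap .
lxc stop snap-builder
EOF
cat manual-build-steps.txt
```

Adapt the instance name, project path, and resulting snap file name to your own project before running anything.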

Shell after

In the tutorial linked above, we talked about the debug flag, which lets you step into the build environment on failure, allowing you to examine the system and better understand what might have gone wrong. Similarly, you can step into the virtual machine or container upon a successful build using the --shell-after flag.

snapcraft --shell-after

You also have the option to run your snaps with the --shell flag. This can be useful in troubleshooting runtime issues, like missing libraries or permissions – or other errors that you encountered while reviewing your snap before pushing it to the store. Alongside the try and pack commands, which we examined last week, this gives you a great deal of flexibility in nailing down issues and bugs during the development phase.

Summary

If you’ve ever raced a car, you know the best lap times aren’t decided by straight-line dashes; they are decided by how fast you go through corners. Slow in, fast out. This article comes with a handful of useful, advanced tricks – the ability to use different provisioning backends, the destructive mode and the after-build shell. These should help you enjoy higher, faster productivity creating snaps. If you have any feedback or questions on this topic, please join our forum for a discussion.

No iconic Lord of the Rings phrases were harmed in the writing of this article.

Photo by Marco Bicca on Unsplash.

The post Faster snap development – additional tips and tricks appeared first on Ubuntu Blog.

Ubuntu Podcast from the UK LoCo: S12E13 – Prince of Persia


This week we’ve been giving talks and spending 8 and a half years becoming a Doctor of Philosophy. We discuss 32-bit Intel packages in Ubuntu, the Eoan Ermine wallpaper competition, Mir still not being dead, the new Snap Store, some jobs you might want to apply for, UbuCon Europe, Oggcamp, the new Raspberry Pi 4 and round up some headlines from the tech world.

It’s Season 12 Episode 13 of the Ubuntu Podcast! Mark Johnson, Martin Wimpress and Laura Cowen are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Podcast Ubuntu Portugal: Ep 58 – Tentações


In this episode we had the participation of João Jotta, the voice behind the game reviews on the LinuxTechPT podcast, who came to tell us about LinuxTechPT’s Conversas Livres. We also returned to the Creative Commons Summit 2019, and talked about the Slimbook Zero, Ceph, the PCEngines boards, news in Mir, the most recent LXD security flaw, the Huawei-Google crisis, the new Dell workstations, Ubuntu 19.10 Eoan Ermine, Unity 3D, and the Ubucon Europe 2019 Call for Papers.

  • https://summit.creativecommons.org/
  • https://slimbook.es/zero-smart-thin-client-linux-windows-fanless
  • https://pcengines.ch/
  • https://www.packtpub.com/eu/virtualization-and-cloud/mastering-ceph-second-edition
  • https://linuxtech.pt/
  • https://discourse.ubuntu.com/t/mir-1-2-0-release/11034
  • https://shenaniganslabs.io/2019/05/21/LXD-LPE.html
  • https://bartongeorge.io/2019/05/29/ladies-and-gentlemen-introducing-the-dell-precision-5540-7540-and-7540-developer-editions/
  • https://www.forbes.com/sites/jasonevangelho/2019/05/24/ubuntu-19-10-nvidia-proprietary-gpu-driver-iso-linux/#3634deba711a
  • https://blogs.unity3d.com/2019/05/30/announcing-the-unity-editor-for-linux/
  • https://sintra2019.ubucon.org/call-for-papers-announcement/

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – recording, production, editing, mixing and mastering of sound). Contact: thunderclawstudiosPT–arroba–gmail.com.

Another way to support us is to use the Humble Bundle affiliate links: when you use these links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want.

Bundle suggestion:
https://www.humblebundle.com/books/open-source-bookshelf?partner=pup

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (just as in the suggested link) and you will also be supporting us.

Attribution and licenses

“Australia dingo” by llee_wu is licensed under CC BY-ND 2.0

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License (https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.


Canonical Design Team: Ubuntu updates for TCP SACK Panic vulnerabilities


Issues have been identified in the way the Linux kernel’s TCP implementation processes Selective Acknowledgement (SACK) options and handles low Maximum Segment Size (MSS) values. These TCP SACK Panic vulnerabilities could expose servers to a denial of service attack, so it is crucial to have systems patched.

Updated versions of the Linux kernel packages are being published as part of the standard Ubuntu security maintenance of Ubuntu releases 16.04 LTS, 18.04 LTS, 18.10, 19.04 and as part of the extended security maintenance for Ubuntu 14.04 ESM users.

It is recommended to update to the latest kernel packages and consult Ubuntu Security Notices for further updates.

Ubuntu Advantage for Infrastructure subscription customers can find the latest status information in our Knowledge Base and file a support case with Canonical support for any additional questions or concerns around SACK Panic.

Canonical’s Kernel Livepatch updates for security vulnerabilities related to TCP SACK processing in the Linux kernel have been released and are described by CVE-2019-11477 and CVE-2019-11478, with details of the patch available in LSN-0052-1.

These CVEs have a Livepatch fix available; however, a minimum kernel version is required for Livepatch to install the fix, as denoted by the table in LSN-0052-1, reproduced here:

| Kernel               | Version | Flavors             |
|----------------------|---------|---------------------|
| 4.4.0-148.174        | 52.3    | generic, lowlatency |
| 4.4.0-150.176        | 52.3    | generic, lowlatency |
| 4.15.0-50.54         | 52.3    | generic, lowlatency |
| 4.15.0-50.54~16.04.1 | 52.3    | generic, lowlatency |
| 4.15.0-51.55         | 52.3    | generic, lowlatency |
| 4.15.0-51.55~16.04.1 | 52.3    | generic, lowlatency |

Livepatch fixes for CVE-2019-11477 and CVE-2019-11478 are not available for prior kernels; an upgrade and reboot to the appropriate minimum version is necessary. These kernel versions correspond to the availability of mitigations for the MDS series of CVEs (CVE-2018-12126, CVE-2018-12127, CVE-2018-12130 and CVE-2019-11091).
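As a quick local check, you can compare your running kernel against the required minimum with a version-aware sort. This is only a sketch: the MIN value is an example taken from the table, and you should pick the row matching your kernel series and flavor.

```shell
# Sketch: check whether the running kernel meets a Livepatch minimum version.
# MIN is an example from the table above; adjust it to your kernel series.
MIN=4.15.0-50.54
cur=$(uname -r)
# sort -V orders version strings; if MIN sorts first, cur is new enough.
if [ "$(printf '%s\n' "$MIN" "$cur" | sort -V | head -n1)" = "$MIN" ]; then
  echo "kernel $cur meets the Livepatch minimum $MIN"
else
  echo "kernel $cur is older than $MIN: upgrade and reboot first"
fi
```

On kernels older than the minimum, upgrade and reboot before expecting Livepatch to deliver these fixes.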

Additionally, a third SACK related issue, CVE-2019-11479, does not have a Livepatch fix available because it is not technically feasible to apply the changes via Livepatch. Mitigation information is available at the Ubuntu Security Team Wiki.

If you have any questions and want to learn more about these patches, please do not hesitate to get in touch.

The post Ubuntu updates for TCP SACK Panic vulnerabilities appeared first on Ubuntu Blog.

Lubuntu Blog: Lubuntu Eoan Ermine Wallpaper Contest

The Lubuntu Team is pleased to announce we are running an Eoan Ermine wallpaper competition, giving you, our community, the chance to submit and get your favorite wallpapers included in the Lubuntu 19.10 release.

Show your artwork

To enter, simply post your image into this thread. We will close this thread towards the beginning of […]

Stephen Michael Kellat: Early July Quick Bits


I can't quite sleep at the moment. Recapping the past week might be a good idea. In no particular order:

Canonical Design Team: Analyze ACPI Tables in a Text File with FWTS


I often need to implement tests for new ACPI tables before they become available on real hardware. Fortunately, FWTS provides a framework to read ACPI tables’ binary.

The below technique is especially convenient for ACPI firmware and OS kernel developers. It provides a simple approach to verifying ACPI tables without compiling firmware and deploying it to hardware.

Using acpidump.log as an input for FWTS

The command to read ACPI table binaries is:

# check ACPI methods in a text file
$ fwts method --dumpfile=acpidump.log

or

# check ACPI FACP table in a text file
$ fwts facp --dumpfile=acpidump.log

where acpidump.log contains ACPI tables’ binary in a specific format as depicted below:

Format of acpidump
  • Table Signature – the 4-byte ACPI table signature
  • Offset – data starts at 0000 and increases by 16 bytes per line
  • Hex Data – each line has 16 hex integers of the compiled ACPI table
  • ASCII Data – the ASCII representation of the hex data
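An illustrative fragment in this format (the byte values shown here are made up for illustration; only the leading signature bytes 46 41 43 50, spelling "FACP", are meaningful):

```
FACP @ 0x0000000000000000
    0000: 46 41 43 50 14 01 00 00 06 3f 49 4e 54 45 4c 20  FACP.....?INTEL 
    0010: 54 65 6d 70 6c 61 74 65 00 00 00 00 49 4e 54 4c  Template....INTL
```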

This format may look familiar because it is not specific to FWTS. In fact, it is the same format generated by acpidump. In other words, the two code snippets below generate identical results.

# reading ACPI tables from memory
$ sudo fwts method
# dumping ACPI tables and testing it
$ sudo acpidump > acpidump.log
$ fwts method --dumpfile=acpidump.log

For developers, using the --dumpfile option means that it is possible to test ACPI tables before deploying them on real hardware. The following sections present how to prepare a customized log file.

Using a customized acpidump.log for FWTS

We can use acpica-tools to generate an acpidump.log. The following is an example of building a customized acpidump.log to test the fwts method command.

Generate a dummy FADT

A Fixed ACPI Description Table (FADT) contains vital information for the ACPI OS, such as the base addresses of hardware registers. As a result, FWTS requires a FADT in acpidump.log so it can recognize acpidump.log as a valid input file.

$ iasl -T FACP
$ iasl facp.asl > /dev/null
$ echo "FACP @ 0x0000000000000000" >> acpidump.log
$ xxd -c 16 -g 1 facp.aml | sed 's/^0000/    /' >> acpidump.log
$ echo "" >> acpidump.log

Develop a Customized DSDT table

A Differentiated System Description Table (DSDT) is designed to provide OEMs’ value-added features. A dummy DSDT can be generated as below, and OEM value-added features, such as an ACPI battery or a hotkey for airplane mode, can be added to it.

# Generate a DSDT
$ iasl -T DSDT
# Customize dsdt.asl
#  ex. adding an ACPI battery or airplane mode devices

Compile the DSDT table to binary

The customized DSDT can be compiled and appended to acpidump.log.

$ iasl dsdt.asl > /dev/null
$ echo "DSDT @ 0x0000000000000000" >> acpidump.log
$ xxd -c 16 -g 1 dsdt.aml | sed 's/^0000/    /' >> acpidump.log
$ echo "" >> acpidump.log

Run method test with acpidump.log

And finally, run the fwts method test.

$ fwts method --dumpfile=acpidump.log

Final Words

While we use DSDT as an example, the same technique applies to all ACPI tables. For instance, HMAT was introduced and frequently updated in recent ACPI specs, and the Firmware Test Suite keeps up to date with most, if not all, of these changes. As a consequence, FWTS can detect errors before firmware developers integrate HMAT into their projects, and therefore reduces errors in final products.

The post Analyze ACPI Tables in a Text File with FWTS appeared first on Ubuntu Blog.

Ubucon Europe 2019: Call For Volunteers

Hi

We are 4 months away from the Ubucon Europe 2019 meeting, which will be held in Sintra on 10, 11, 12 and 13 October. It is very important to spread the word about this event, so from now on you can contribute by sharing this information with the people you know, so that it reaches even further.

As with any event, it is very important that volunteers take part in the various tasks that precede it, as well as during and after the event. The tasks are diverse, and you can find some information at this link.

Your participation will contribute to our success and convey an image of cooperation and organization in this great mission we have undertaken.

Volunteer by filling out the following form, or use the form embedded in this post.

Daniel Pocock: Arrival at CommCon 2019


Last night I arrived at CommCon 2019 in Latimer, Buckinghamshire, a stone's throw from where I used to live in St Albans, UK. For many of you it is just a mouseclick away thanks to online streaming.

It is a residential conference with many of the leaders in the free and open source real-time communications and telephony ecosystem, together with many users and other people interested in promoting free, private and secure communications.

On Wednesday I'll be giving a talk about packaging and how it relates to RTC projects, given my experience in this domain as a Fedora, Ubuntu and Debian Developer.

David Duffet, author of Let the Geek Speak, gave the opening keynote, discussing the benefits and disadvantages of free, open source software in telecommunications. This slide caught my attention:

where he talks about the burden of

ruthless ungrateful expectations for continued service and evolution

on developers and volunteers. This reminded me of some of the behaviour recently documented on my blog.

CommCon organizers and sponsors, however, have found far more effective ways to motivate people: welcome gifts:

There is some great wildlife too:

Canonical Design Team: Design and Web team summary – 8 July 2019


This was a fairly busy two weeks for the Web & design team at Canonical. Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical. Here are a few highlights of completed work. We also moved several other projects forward that we will share in coming weeks.

Takeover and landing page for Edge month

A series of four webinars explaining how edge computing provides enterprises with real-time data processing, cost-effective security and scalability.

Learn more about edge month

OpenStack content updates

Each week the team reviews how our content is performing in a section of the site. This week’s focus was OpenStack; the result was around ten changes to the section.

MAAS

The MAAS squad develops the UI for the MAAS project.

Implementation of small graphs for KVM listing

We reorganized the table columns in the new KVM page and added per-KVM actions to each row. We have also added new mini in-table charts with popovers showing data for storage, RAM and CPU per KVM.

Settings fresh navigation wireframes 

The settings in MAAS are due a revamp, and as part of the preparation for this work the UX team has been testing different settings navigation structures and layouts. This has been wireframed and will be designed in the next few weeks. So stay tuned for the latest and greatest in settings navigation.

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

Design to disable k8s deployments in the GUI

The Juju team is working on the production of Kubernetes charms and bundles in order to deploy with Juju on Kubernetes clusters (beta). GUI-wise, this functionality is not yet supported, and we will redirect users to the relevant pages in the documentation.

Prepare initial engagement

The design team is getting in touch with our customers and users through different media. We are setting up regular meet-ups, defining the initial work with marketing for newsletter and email content, and assisting with working sessions with Juju users.

This user research phase is particularly important to gather feedback and information for the new products we are working on for Juju and JAAS, and to be able to collect users’ needs and requests for the current products.

Research, UX and explorations around organisation, models and controllers

We conducted research around organisation and enterprise structures as applied to the Juju model and the JIMM controller, to define the hierarchy of teams and users (RBAC and IM).

Suru designs applied to jaas.ai

The JAAS team has been working on applying the new suru style design to the jaas.ai website. This week this has been implemented and will be live very shortly. This redesign introduces slanted strips with suru crossed header and footer sections.

User flows comparison Snap and Charm stores consumer/publisher experience

The design team is exploring the user experiences of the Snap and Charm stores in order to align the consumer and publisher user journeys of the front pages. The same explorations are ongoing for the command lines of Snap/Juju and Snapcraft/Charm.

Snapcraft

The Snapcraft team works closely with the snap store team to develop and maintain the snap store website.

Release UI drag’n’drop

The drag’n’drop functionality for the Releases page was finalised this iteration. We updated the visual style of draggable elements to make them clearer and more consistent, and made it possible to promote a whole channel or a single revision using drag’n’drop.

This feature is already released and available to snapcraft.io users.

Default tracks

We started work on the UI for managing default tracks of the snaps. As the details of this functionality are still being discussed among the teams this work will continue in the next iteration.

Proof of concept for build

We started investigating the technical aspects of moving build.snapcraft.io functionality into the snapcraft.io web application, for a more consistent and consolidated experience for users.

This work is in its early stages and will continue over the next iterations.

Blog Posts

The post Design and Web team summary – 8 July 2019 appeared first on Ubuntu Blog.


Ubucon Europe 2019: 1st batch of talks approved!


Since May we have been receiving speaker applications from all over the world, with lots of super interesting topics. We are very excited about what’s being prepared to make Ubucon Europe in Sintra one of the best!

This week, we have approved the first batch of talks for you. To know more about what to expect in October, visit our short summary at:

https://manage.ubucon.org/eu2019/sneak/

More talks will be approved in the upcoming weeks.

Do you have something to share? Hurry up, you've still got time!

Submit yours now at:

https://manage.ubucon.org/eu2019/cfp

Thierry Carrez: Open source in 2019, Part 2/3


21 years in, the landscape around open source has evolved a lot. In part 1 of this 3-part series, I explained all of open source's benefits and why I think open source is necessary today. In this part, I'll argue that open source is not enough.

The relative victory of open source

All the benefits detailed in part 1 really explain why open source became so popular in the last 15 years. Open source is everywhere today. It has become the default way to build and publish software. You can find open source on every server, you can find open source on every phone... Even Microsoft, the company which basically invented proprietary software, is heavily adopting open source today, with great success. By all accounts, open source won.

But... has it, really?

The server, and by extension the computing, networking, and storage infrastructure, are unquestionably dominated by open source. But the growing share of code running operations for this infrastructure software is almost always kept private. The glue code used to provide users access to this infrastructure (what is commonly described as "cloud computing") is more often than not a trade secret. And if you look to the other side, the desktop (or user-side applications in general) is still overwhelmingly driven by proprietary software.

Even contemplating what are generally considered open source success stories, winning can leave a bitter taste in the mouth. For example, two key tech successes of the last 10 years, Amazon Web Services and Android, both rely heavily on open source software. They are arguably part of the open source success picture I just painted. But if you go back to part 1 and look at all the user benefits I listed, the users of AWS and Android don’t really enjoy them all. As an AWS user, you don't have transparency: you can’t really look under the hood and understand how AWS runs things, or why the service behaves the way it does. As an Android user, you can’t really engage with Android upstream, contribute to the creation of the software and make sure it serves your needs better tomorrow.

So open source won and is ubiquitous... however in most cases, users are denied some of the key benefits of open source. And looking at what is called "open source" today, one can find lots of twisted production models. By "twisted", I mean models where some open source benefits go missing, like the ability to efficiently engage in the community.

For example, you find single-vendor open source, where the software is controlled by a single company doing development behind closed doors. You find open-core open source, where advanced features are reserved for a proprietary version and the open source software is used as a trial edition. You find open source code drops, where an organization just periodically dumps their code to open-wash it with an open source label. You find fire and forget open source, where people just publish once on GitHub with no intention of ever maintaining the code. How did we get here?

Control or community

What made open source so attractive to the software industry was the promise of the community. An engaged community that would help them write the software, build a more direct relationship that would transcend classic vendor links, and help you promote the software. The issue was, those companies still very much wanted to keep control: of the software, of the design, of the product roadmap, and of the revenue. And so, in reaction to the success of open source, the software industry evolved a way to produce open source software that would allow them to retain control.

But the fact is... you can’t really have control and community. The exclusive control by a specific party over the code is discouraging other contributors from participating. The external community is considered as free labor, and is not on a level playing field compared to contributors on the inside, who really decide the direction of the software. This is bound to create frustration. This does not make a sustainable community, and ultimately does not result in sustainable software.

The open-core model followed by some of those companies creates an additional layer of community tension. At first glance, keeping a set of advanced features for a proprietary edition of the software sounds like a smart business model. But what happens when a contributor proposes code that would make the "community edition" better? Or when someone starts to question why a single party is capitalizing on the work of "the community"? In the best case, this leads to the death of the community, and in the worst case this leads to a fork... which makes this model particularly brittle.

By 2019, I think it became clearer to everyone that they have to choose between keeping control and growing a healthy community. However, most companies chose to retain control and abandoned the idea of true community contribution. Their goal is to keep reaping the marketing gains of calling their software open source, of pretending to have all the benefits associated with the open source label, while applying a control recipe that is much closer to proprietary software than to the original freedoms and rights associated with free software and open source.

How open source is built impacts the benefits users get

So the issue with twisted production models like single-vendor or open-core is that you are missing some benefits, like availability, or sustainability, or self-service, or the ability to engage and influence the direction of the software. The software industry adapted to the success of open source: it adopted open source licenses but little else, stripping users of the benefits associated with open source while following the letter of the open source law.

How is that possible?

The issue is that free software and open source both addressed solely the angle of freedom and rights that users get with the end product, as conveyed through software licenses. They did not mandate how the software was to be built. They said nothing about who really controls the creation of the software. And how open source is built actually has a major impact on the benefits users get out of the software.

The sad reality is, in this century, most open source projects are actually closed one way or the other: their core development may be done behind closed doors, or their governance may be locked down to ensure permanent control by the main sponsor. Everyone produces open source software, but projects developed by a truly open community have become rare.

And yet, with truly open communities, we have an open source production model that guarantees all the benefits of free and open source software. It has a number of different names. I call it open collaboration: the model where a community of equals contributes to a commons on a level playing field, generally under an open governance and sometimes the asset lock of a neutral non-profit organization. No reserved seats, no elite group of developers doing design behind closed doors. Contribution is the only valid currency.

Open collaboration used to be the norm for free and open source software production. While it is rarer today, the success of recent open infrastructure communities like OpenStack or Kubernetes proves that this model is still viable at very large scale, and can be business-friendly. This model guarantees all the open source benefits I listed in part 1, especially sustainability (not relying on a single vendor), and the ability for anyone to engage, influence the direction of the software, and make sure it addresses their future needs.

Open source is not enough

As much as I may regret it, the software industry is free to release their closely-developed software under an open source license. They have every right to call their software "open source", as long as they comply with the terms of an OSI-approved license. So if we want to promote good all-benefits-included open source against twisted some-benefits-withheld open source, F/OSS advocates will need to regroup, work together, reaffirm the open source definition and build additional standards on top of it, beyond "open source".

This will be the theme of the last part in this series, to be published next week. Thank you for reading so far!

Full Circle Magazine: Full Circle Weekly News #138


Correction
The Raspberry Pi 4B ships with two micro HDMI ports, not mini HDMI, as was stated in the previous episode.


Purism’s Security Key Will Generate Keys Directly on the Device, Made in the USA
https://news.softpedia.com/news/purism-s-security-key-will-generate-keys-directly-on-the-device-made-in-the-usa-526570.shtml

Mageia 7 Linux Distro Available for Download
https://betanews.com/2019/07/01/mageia-7-linux-seven-mageia7/

Debian 10 “Buster” Released
https://www.debian.org/News/2019/20190706

FreeDOS turns 25
https://liliputing.com/2019/07/freedos-turns-25-open-source-dos-compatible-operating-system.html

Linux Mint 20 Will Drop Support for 32-bit Installations

https://news.softpedia.com/news/linux-mint-20-and-future-releases-will-drop-support-for-32-bit-installations-526601.shtml

MintBox 3 Linux Mint-Powered Mini PC Announced as the Most Powerful MintBox Ever
https://news.softpedia.com/news/mintbox-3-linux-mint-powered-mini-pc-announced-as-the-most-powerful-mintbox-ever-526602.shtml

GNU Rush 2.0 Released
https://www.phoronix.com/scan.php?page=news_item&px=GNU-Rush-2.0-Released

Ubuntu 19.10 Wallpaper Competition Is Now Open for Submissions
https://news.softpedia.com/news/ubuntu-19-10-eoan-ermine-wallpaper-competition-is-now-open-for-submissions-526599.shtml

Linux Overtakes Windows Server As Most Used Operating System on Azure
https://www.onmsft.com/news/linux-overtakes-windows-server-as-most-used-operating-system-on-azure

Microsoft Asks To Join Private Linux Security Developer List

Credits:
Ubuntu “Complete” sound: Canonical
 
Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

Jono Bacon: Emily Musil Church on the Global Learning XPRIZE


Back in 2014, I worked at the XPRIZE Foundation, and we launched the Global Learning XPRIZE. This $15 million competition, funded by Elon Musk among others, challenged teams to build, within 18 months, an Android tablet application that teaches kids how to read, write, and perform arithmetic, all without the aid of a teacher. The competition showed the potential of technology to help bring education to 250 million+ kids around the world.

Fast forward to 2019 and the prize was awarded to Team onebillion and Team KitKit School, with all entries showing incredible results in learning across literacy, maths, and beyond. What’s more, all the entries have been open sourced on GitHub.

Emily Musil Church is the director of the prize, and was involved at every step of the way from shaping the competition, to running the field trials in Tanzania, and more. She is one of the most incredible people I have ever worked with.

In this episode of Conversations With Bacon we get into the nature of the competition, the logistics of running field trials with thousands of Google Pixel tablets across hundreds of remote villages, how the teams competed and collaborated together, the broader impact of education beyond merely learning, the open sourcing of the entries, and much more. This is a really fascinating and inspiring conversation!

The post Emily Musil Church on the Global Learning XPRIZE appeared first on Jono Bacon.

Sebastien Bacher: Bolt 0.8 update


Christian recently released bolt 0.8, which includes IOMMU support. The Ubuntu security team seemed eager to see that new feature available so I took some time this week to do the update.

The new version also features a new bolt-mock utility and installed-tests availability. Since I was updating the package anyway, I used the opportunity to add an autopkgtest based on the new bolt-tests binary; hopefully that will help us make sure our TB3 support stays solid in the future ;-)
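For readers curious what such wiring looks like: an autopkgtest is declared with a small stanza in debian/tests/control. The sketch below is hypothetical and not the actual bolt packaging — the test name, dependency list, and runner invocation are all assumptions for illustration:

```
# debian/tests/control — hypothetical sketch, not the real bolt packaging.
# Declares one autopkgtest that runs the installed tests shipped in the
# (assumed) bolt-tests binary package.
Tests: installed-tests
Depends: bolt-tests, gnome-desktop-testing
Restrictions: allow-stderr

# debian/tests/installed-tests would then be a short script along the
# lines of:
#   gnome-desktop-testing-runner bolt
```

The nice part of testing against something like bolt-mock is that the suite can exercise the Thunderbolt code paths without needing real tb3 hardware on the test machine.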

The update is available in Debian Experimental and Ubuntu Eoan, enjoy!
