
Colin Watson: man-db 2.8.7


I’ve released man-db 2.8.7 (announcement, NEWS), and uploaded it to Debian unstable.

There are a few things of note that I wanted to talk about here. Firstly, I made some further improvements to the seccomp sandbox originally introduced in 2.8.0. I do still think it’s correct to try to confine subprocesses this way as a defence against malicious documents, but it’s also been a pretty rough ride for some users, especially those who use various kinds of VPNs or antivirus programs that install themselves using /etc/ld.so.preload and cause other programs to perform additional system calls. As well as a few specific tweaks, a recent discussion on LWN reminded me that it would be better to make seccomp return EPERM rather than raising SIGSYS, since that’s easier to handle gracefully: in particular, it fixes an odd corner case related to glibc’s nscd handling.
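By way of illustration, here is a generic libseccomp sketch (not man-db's actual code) of the difference: a filter whose default action is SCMP_ACT_ERRNO(EPERM) makes unexpected system calls fail gracefully, where SCMP_ACT_TRAP would kill the process with SIGSYS.

#include <errno.h>
#include <seccomp.h>

int install_filter(void)
{
    /* Default action: fail with EPERM, rather than SCMP_ACT_TRAP,
     * which would deliver SIGSYS. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
    if (ctx == NULL)
        return -1;

    /* Allow only the system calls the subprocess is expected to
     * make (a real filter would list many more). */
    if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0) < 0 ||
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0) < 0 ||
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0) < 0) {
        seccomp_release(ctx);
        return -1;
    }

    /* Load the filter into the kernel; anything not allowed above
     * now returns EPERM to the caller instead of raising SIGSYS. */
    int rc = seccomp_load(ctx);
    seccomp_release(ctx);
    return rc;
}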

Secondly, there was a build failure on macOS that took a while to figure out, not least because I don’t have a macOS test system myself. In 2.8.6 I tried to make life easier for people on this platform with a CFLAGS tweak, but I made it a bit too general and accidentally took away configure’s ability to detect undefined symbols properly, which caused very confusing failures. More importantly, I hadn’t really thought through why this change was necessary and whether it was a good idea. man-db uses private shared libraries to keep its executable size down, and it passes -no-undefined to libtool to declare that those shared libraries have no undefined symbols after linking, which is necessary to build shared libraries on some platforms. But the CFLAGS tweak above directly contradicts this! So, instead of playing core wars with my own build system, I did some refactoring so that the assertion that man-db’s shared libraries have no undefined symbols after linking is actually true: this involved moving decompression code out of libman, and arranging for the code in libmandb to take the database path as a parameter rather than as a global variable (something I’ve meant to fix for ages anyway; 252d7cbc23, 036aa910ea, a97d977b0b). Lesson: don’t make build system changes you don’t quite understand.


Ubuntu Blog: Snaps help Xibo rekindle its relationship with Linux


Sometimes, relationships just don’t work out. At first, it seemed that Xibo and Linux were made for each other. Xibo had a popular open source digital signage and player system, while Linux brought a community of enthusiastic users. Dan Garner of Xibo remembers why they broke up in 2015: “Releasing our player on Linux was too heavy on development resources, we were a small team, and it was difficult to make deployment stable”.

So, Linux releases were shelved, much to the disappointment of users. Xibo’s software remained available as open source and as binaries. However, Linux users had to do the heavy lifting to install it and make it work. Hardcore fans often built their Xibo systems directly from the source code, creating a patchwork of different generations of the software in a universe outside Xibo’s mainstream activities.

Meanwhile, Xibo developed its Android player as a commercial offering and, as Dan puts it, “joined the industry foray into system-on-chip with commercial webOS and Tizen platform versions”. The overriding goal of having a completely open source option permanently available remained. With Xibo’s CMS system already open source, Dan adds, “Xibo has always had an open source player and plans to always do so – there is a strong following for our open source products and users do a lot of cool stuff with them”. 

However, there was still the stumbling block of making the open source player releases stable as Linux packages, compared to Windows versions that were “stable out of the box”. The video and embedded web components of the Linux releases of the Xibo player were especially problematic. Undeterred, the user community came back with suggestions about different frameworks that Xibo could use to resolve the problems, while users continued to use old Linux player versions in the absence of updates. 

The Xibo user base also continued to expand. While education was a key market, Xibo saw its solutions being used in many other sectors including retail, banking, and government. Dan estimates that currently, “installations range from 1 to around 2,000 units in a single CMS with well over 50,000 screens out there running Xibo, possibly even more behind closed doors”.  And even if Xibo prefers to focus on x86 architectures, leaving alternatives like ARM for later, Dan is aware that “Xibo’s open source players run on almost anything users can find”.

Then Dan’s colleague and Xibo co-founder Alex Harrington discovered snaps. As the Xibo team grew, Alex did some research into different ways of using snaps, the channels available, and snaps’ auto-update and API access capabilities. As Dan puts it, “since the Xibo app uses a lot of libraries, having these running in an isolated platform like snaps was attractive – in fact, it sounded like such a perfect fit for Linux version releases that we just launched into it”.

Now, things have changed. The Xibo team is bigger and Linux release packaging has become easier and more reliable, thanks to snaps. “With snaps we can manage the Xibo dependencies much better,” says Dan. “There are no more long installation guides, just one line and it’s done – a huge benefit to users”. There’s something else that Dan and the Xibo team find remarkable about snaps. It’s the silence. “There are no support questions any more about installation, it’s smooth and seamless”.

Snaps and Snapcraft work well in Xibo’s internal processes too. “We work with Docker and then push the results into Snapcraft. Starting to package our app as a snap was simple, taking about two days to learn about and apply Snapcraft for a quality result”. Dan hopes that making the Xibo player snap available via the Snap Store will also bring more order to the current “fractured” base of Xibo Linux player versions, by allowing users to automatically update their installations to the latest version. Currently, Xibo develops, tests and builds using Ubuntu, although additional distributions may be added in the future.

Has Xibo considered commercialising versions of its open source offering? “The short answer is no,” says Dan. Xibo is committed to keeping its open source software available for anyone to use, free of charge. Xibo has a paying cloud option and Dan thinks some users may be attracted by the “easier, one-click experience” of using the Linux player with the cloud hosted CMS, “but there is no plan to directly monetise the Linux player”. 

The Snap Store will be an extra channel for Xibo player software for Linux, rather than a replacement channel for the open source that Xibo continues to make available. Dan says, “the Snap Store makes distribution easier and I think that users are organically attracted by snaps in any case, even if there is great enthusiasm among developers”. What other advice would Dan give to developers weighing the pros and cons of snaps? “Go for it!”, he says, “And visit the Snapcraft Community forum to learn more about everything that’s going on in the world of snaps”.

Install Xibo as a snap here.

Santiago Zarate: When find can't find


If you happen to be using GNU find in deadly combination with a directory that is a symlink (you just don’t know that yet), you will face the hard truth that running:

find /path/to/directory -type f

will return nothing – zero, nada, nichts, meiyou – which is annoying.

Where is it?!

This will make you question your life decisions and your knowledge of the tools you use daily, only to find out that the directory is actually a symlink :).
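If you want to confirm the suspicion first, you can ask find to examine the path itself rather than descend into it (without -L, find does not follow a symlink given as an argument, so -type l matches it):

find /path/to/directory -maxdepth 0 -type l

If the path is printed back at you, it is a symlink.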

So next time you find yourself using find and it returns nothing, but you are sure that your syntax is correct and get no errors, try adding the -follow option or use the -L flag:

find -L /path/to/directory/with/symlink -type f

This will do what you want :)

There it is!

Ubuntu Blog: A guide to developing Android apps on Ubuntu


Android is the most popular mobile operating system and is continuing to grow its market share. IDC expects that Android will have 85.5% of the market by 2022, demonstrating that app development on Android will continue to be an in-demand skill.

For developers looking to build Android apps, Ubuntu is the ideal platform in conjunction with Android Studio – the official Android development environment. Ubuntu features a wide variety of software development tools including numerous programming language compilers, integrated development environments (IDEs) and toolchains to enable developers to target multiple hardware platforms.

Developers using Ubuntu can write an Android app and deploy it to emulated and physical devices using standard tooling, all from their desktop.

In this guide, you will learn:

  • Why Ubuntu Desktop is suited as a platform for Android developers building new apps
  • How to configure and install Android Studio as a snap to get started
  • A step-by-step guide to creating an Android app on Ubuntu targeting an array of devices

To download the whitepaper, complete the form below:

Jonathan Riddell: polkit-qt-1 0.113.0 Released


Some 5 years after the previous release KDE has made a new release of polkit-qt-1, versioned 0.113.0.

Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones. Polkit has an authorization API intended to be used by privileged programs (“MECHANISMS”) offering service to unprivileged programs (“CLIENTS”).

Polkit Qt provides Qt bindings and UI.

This release was done ahead of additions to KIO to support Polkit.

SHA-256:
5b866a2954ef10ffb66156e2fe8ad0321b5528a8df2e4a91b02f5041ce5563a7
GPG fingerprint:
D81C0CB38EB725EF6691C385BB463350D6EF31EF

Notable changes since 0.112.0
———————————————————
– Add support for passing details to polkit
– Remove support for Qt4

https://download.kde.org/stable/polkit-qt-1/

Thanks to Heiko Becker for his work on this release.

Full changelog

  •  Bump version for release
  •  Don’t set version numbers as INT cache entries
  •  Move cmake_minimum_required to the top of CMakeLists.txt
  •  Remove support for Qt4
  •  Remove unneded documentation
  •  authority: add support for passing details to polkit
    https://phabricator.kde.org/D18845
  •  Fix typo in comments
  •  polkitqtlistener.cpp – pedantic
  •  Fix build with -DBUILD_TEST=TRUE
  •  Allow compilation with older polkit versions
  •  Fix compilation with Qt5.6
  •  Drop use of deprecated Qt functions REVIEW: 126747
  •  Add wrapper for polkit_system_bus_name_get_user_sync
  •  Fix QDBusArgument assertion
  • do not use global static systembus instance

 

Ubuntu Blog: Components vs. Plugins in ROS 2


After our series of posts about ROS 2 CLI tools (1, 2), we continue exploring the ROS 2 realm by taking a look at ROS 2 components and, more specifically, how they compare to plugins.

spoiler alert:

Long story short, components are plugins.

Short story long? Is that a thing?

Well, plugins and components are indeed essentially the same thing. Down the road, both are built into their respective shared libraries, neither has a main function, and both are loaded at runtime and used by a third party.

We’ll note here that while plugins come straight outta ROS 1, components on the other hand are the ROS 2 evolution of ROS 1 nodelets (what’s that?) after being exposed to a Fire Stone on a full moon. Same idea, different beasts.

Plugins vs. Components

So what are the actual differences? To put it simply, a component is a plugin which derives from a ROS 2 node.

This assertion is backed by the fact that both rely on the class_loader package, a ROS-independent library for dynamic class introspection and loading from runtime libraries. Their respective internal plugin-related plumbing (factory, registration, library path finding, loading, etc.) is managed underneath by class_loader. Both offer a macro-based helper for registration, and both macros resolve to class_loader‘s CLASS_LOADER_REGISTER_CLASS macro. Yeah, they immediately resolve to it – I mean, they don’t even try to hide it. On the other hand, why would they?

For a traditional plugin, the base class can be anything, user defined or not. The only constraint is that the derived class has to be default constructible (therefore, by the law of C++, the base class too). In the case of components, the plugin class commonly derives from the rclcpp::Node class, but it is not required. Indeed, the requirements for a class to be exported as a component are:

  • Have a constructor that takes a single argument that is a rclcpp::NodeOptions instance.
  • Have a method of the signature:
    • rclcpp::node_interfaces::NodeBaseInterface::SharedPtr get_node_base_interface(void)

which includes rclcpp::Node of course, but also e.g. rclcpp_lifecycle::LifecycleNode. It means that a component can inherit from any class meeting those requirements, but it can also implement them itself and skip inheritance altogether.

That last point is an important difference between plugins and components.

Components are special plugins

Indeed, plugins have traditionally had to declare their base classes when registering with the class_loader factory scheme. This allows the factory to instantiate the derived object as a smart pointer of base type, which in turn can be delivered to the user – the developer.

// Registering a plugin
PLUGINLIB_EXPORT_CLASS(DerivedClass, BaseClass)
//
...
// Instantiating a plugin
// We have access to all of BaseClass API
std::shared_ptr<BaseClass> base_ptr = plugin_loader.createSharedInstance("DerivedClass");

Components’ base class registration, on the other hand, was developed relying on void (smart) pointer-based type erasure, thus allowing for a simple wrapper class as a common base class for all components. This design implies that the developer is not expected to query the factory for a new instance themselves. What would you do with a void pointer anyway? Instead, the factory serves intended agents such as the rclcpp_components::ComponentManager and rclcpp_components::ComponentContainer.

// Registering a component
RCLCPP_COMPONENTS_REGISTER_NODE(DerivedNode)
//
...
// NodeInstanceWrapper is a wrapper around both a
// std::shared_ptr<void> and a
// rclcpp::node_interfaces::NodeBaseInterface::SharedPtr
// Not much to do with those!
rclcpp_components::NodeInstanceWrapper component_wrapper = component_loader->create_node_instance(options);

In most cases, components inherit from rclcpp::Node as it is the easiest way to fulfill the above requirements. Therefore, we’ll assume that a component is a ROS 2 node, with everything it involves – possibly parameters, listeners/publishers, services, actions, et tutti quanti.
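As an illustration, here is roughly what a minimal component looks like (a sketch – the namespace, class and node names are made up):

#include <rclcpp/rclcpp.hpp>
#include <rclcpp_components/register_node_macro.hpp>

namespace demo
{
// Deriving from rclcpp::Node satisfies both requirements listed above.
class TalkerComponent : public rclcpp::Node
{
public:
  explicit TalkerComponent(const rclcpp::NodeOptions & options)
  : rclcpp::Node("talker", options)
  {
    RCLCPP_INFO(this->get_logger(), "Talker component constructed");
  }
};
}  // namespace demo

// Resolves to class_loader's CLASS_LOADER_REGISTER_CLASS under the hood.
RCLCPP_COMPONENTS_REGISTER_NODE(demo::TalkerComponent)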

We can therefore make an important distinction here: components are plugins with a ROS 2 interface. This draws an important line as when and where to use plugins or components.

Plugins or components, that is the question

The answer to this question obviously depends on your project. But first, let me assume that you do have a need for plugins and spare you asking

‘Do you really need to use plugins?’

(see what I did there).

As a general rule, I’d recommend using components over plugins whenever possible. The reason is simple: you, as a component developer, do not have to deal with the plugin aspect of things besides the registration macro. No plugin name as a ROS parameter, no plugin loader, no exception handling, etc. Then, from a user perspective, it is clearer, I believe, to launch a specific node, say my_plugin_b_node, rather than launching an interface node given a plugin’s name as a ROS parameter, my_node plugin_name:='PluginB'. It also spares the frustration of seeing the node crash or stall because the plugin is not found or misspelled.

Note that this recommendation does not prevent you from enforcing some common API in the same manner as would arise from using plugins. Your components can inherit from a base class of your making which enforces this API. The base class may also implement the ROS interface, abstracting it away, leaving the derived class a simple virtual doMath(Arg)-like function to define.
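For instance, such a base class could own the entire ROS interface and expose a single pure virtual function to derived components (a sketch with made-up names, assuming std_msgs is available):

#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/float64.hpp>

// The base class handles subscription and publication; derived
// components only implement doMath().
class MathComponentBase : public rclcpp::Node
{
public:
  explicit MathComponentBase(const rclcpp::NodeOptions & options)
  : rclcpp::Node("math_component", options)
  {
    pub_ = create_publisher<std_msgs::msg::Float64>("output", 10);
    sub_ = create_subscription<std_msgs::msg::Float64>(
      "input", 10,
      [this](std_msgs::msg::Float64::SharedPtr msg) {
        std_msgs::msg::Float64 out;
        out.data = doMath(msg->data);  // the only derived-class responsibility
        pub_->publish(out);
      });
  }

protected:
  virtual double doMath(double arg) = 0;

private:
  rclcpp::Publisher<std_msgs::msg::Float64>::SharedPtr pub_;
  rclcpp::Subscription<std_msgs::msg::Float64>::SharedPtr sub_;
};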

But my stuff is so modular that I’m using several plugins

Can you possibly rethink the design? Can you break it down in several small nodes that communicate through ROS interfaces? I’d bet that some of those intermediate data could be useful somewhere else in your system too.

If not, or if you find yourself with a different use case which does require plugins, you may still declare your node class as a component! Components and plugins are not mutually exclusive. Moreover,

In ROS 2 [components are] the recommended way of writing your code.

Show me some code

While doing the legwork for this post, I wrote a small ROS 2 package which has no other purpose than being a go-to example of how to write a component or a plugin in ROS 2. You may find it on GitHub: demo_plugin_component. This example showcases the use of plugins in conjunction with components – look for one and get both. After compiling, one may run it in any of the following ways.

Like a plain node,

$ ros2 run demo_plugin_component talker_node __params:=demo_plugin_component/cfg/params.yaml

As a component in a component container,

ros2 run rclcpp_components component_container
ros2 component load /ComponentManager demo_plugin_component ros2_playground::TalkerNode -p writter_name:='ros2_playground::MessageWritterDerived'

As a component in a component container, from a launch file,

ros2 launch demo_plugin_component talker.launch.py

Do not forget to run a topic echo to monitor the node output; it is a talker after all:

ros2 topic echo /chatter

Sam Hewitt: How to Run a Usability Test


One of the most important steps of the design process is “usability testing”: it gives designers the chance to put themselves in other people’s shoes by gathering direct feedback from people in real time to determine how usable an interface may be. This is just as important for the free and open source software development process as it is for any other.

Though free software projects often lack sufficient resources for other more extensive testing methods, there are some basic techniques that can be done by non-experts with just a bit of planning and time—anyone can do this!

Free Software Usability

Perhaps notoriously, free software interfaces have long been unapproachable; how many times have you heard: “this software is great…once you figure it out.” The steep learning curve of many free software applications is not representative of how usable or useful they are. More often than not it’s indicative of free software’s relative complexity, and that can be attributed to the focus on baking features into a piece of software without regard for how usable they are.

A screenshot of the calibre e-book management app's poor UI

Free software developers are often making applications for themselves and their peers, and the step in development where you’d figure out how easy it is for other people to use – testing – gets skipped. In other words, as the author of the application you are of course familiar with how the user interface is laid out and how to access all the functionality; you wrote it. A new user would not be, and may need time or knowledge to discover the functionality. This is where usability testing can come in, to help you figure out how easy your software is to use.

What is “Usability Testing”?

For those unfamiliar with the concept, usability testing is a set of methods in user-centric design meant to evaluate a product or application’s capacity to meet its intended purpose. Careful observation of people while they use your product, to see if it matches what it was intended for, is the foundation of usability testing.

The great thing is that you don’t need years of experience to run some basic usability tests, you need only sit down with a small group of people, get them to use your software, and listen and observe.

What Usability Testing is Not

Gathering people’s opinions (solicited or otherwise) on a product is not usability testing; that’s market research. Usability testing isn’t about querying people’s already formed thoughts on a product or design, it’s about determining whether they understand a given function of a product or its purpose by having them use said product while you gather feedback.

Usability is not subjective; it is concrete and measurable, and therefore testable.

Preparing a Usability Test

To start, pick a series of tasks within the application that you want to test that you believe would be straightforward for the average person to complete. For example: “Set the desktop background” in a photos app, “Save a file with a new name” in a text editor, “Compose a new email” in an email client, etc. It is easiest to pick tasks that correspond to functions of your application that are (intended to be) evident in the user interface and not something more abstract. Remember: you are testing the user interface not the participant’s own ability to do a task.

You should also pick tasks that you would expect to take no more than a few minutes each, if participants fail to complete a task in a timely manner that is okay and is useful information.

Create Relatable Scenarios

To help would-be participants of your test, draft simple hypothetical scenarios or stories around these tasks which they can empathize with to make them more comfortable. It is very important in these scenarios that you do not use the same phrasing as present in the user interface or reference the interface as it would be too influential on the testers’ process. For instance, if you were testing whether an email client’s compose action was discoverable, you would not say:

Compose an email to your aunt using the new message button.

This gives too much away about the interface as it would prompt people to look for the button. The scenario should be more general and have aspects that everyone can relate to:

It’s your aunt’s birthday and you want to send her a well-wishes message. Please compose a new email wishing her a happy birthday.

These “relatable” aspects gives the participant something to latch onto and it makes the goal of the task clearer for them by allowing them to insert themselves into the scenario.

Finding Participants

Speaking of participants, you need at least five people for your test; after five there are diminishing returns, as the more people you add, the less you learn, since you’ll begin to see things repeat. This article goes into more detail, but to quote its summary:

Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.

This is not to say that you stop after a single test with five individuals, it’s that repetitive tests with small groups allow you to uncover problems that you can address and retest efficiently, given limited resources.

Also, the more random the selection group, the better the results of your test will be – “random” as in you grabbed passers-by in the hallway or on the street. As a bonus, it’s also best to offer some sort of small gratuity for participating, to motivate people to sign up.

Warming Up Participants

It’s also important to not have the participants jump into a test cold. Give participants some background and context for the tests and brief them on what you are trying to accomplish. Make it absolutely clear that the goal is to test the interface, not them or their abilities; it is very important to stress to the participants that their completion of a task is not the purpose of the test but determining the usability of the product is. Inability to complete a task is a reflection of the design not of their abilities.

Preliminary Data Gathering

Before testing, gather important demographic information from your participants, things like age, gender (how they identify), etc. and gauge their level of familiarity with or knowledge of the product category, such as: “how familiar are you with Linux/GNOME/free software on a scale from 1-5?” All this will be helpful as you break down the test results for analysis to see trends or patterns across test results.

Running the Test

Present the scenarios for each task one at a time and separately as to not overload the participants. Encourage participants to give vocal feedback as they do the test, and to be as frank and critical as possible as to make the results more valuable, assuring them your feelings will not be hurt by doing so.

During the task you must be attentive and observe several things at once: the routes they take through your app, what they do or say during or about the process, their body language, and the problems they encounter – this is where extensive note-taking comes in.

No Hints!

Do not interfere in the task at hand by giving hints or directly helping the participant. While the correct action may be obvious or apparent to you, the value is in learning what isn’t obvious to other people.

If participants ask for help it is best to respond with guiding questions; if a participant gets stuck, prompt them to continue with questions such as “what do you think you should do?” or “where do you think you should click?” but if they choose not finish or are unable to, that is okay.

Be Watchful

The vast majority of stumbling blocks are found by watching the body language of people during testing. Watch for signs of confusion or frustration—frowning, squinting, sighing, hunched shoulders, etc.—when a participant is testing your product and make note of it, but do not make assumptions about why they became frustrated or confused: ask them why.

It is perfectly alright to pause the test when you see signs of confusion or frustration and say:

I noticed you seemed confused/frustrated, care to tell me what was going through your mind when you were [the specific thing they were doing]?

It’s here where you will learn why someone got lost in your application and that insight is valuable.

Take Notes

For the love of GNU, pay close attention to the participants and take notes. Closely note how difficult a participant finds a task, what their body language is while they do the task, how long it takes them, and problems and criticisms participants have. Having participants think aloud or periodically asking them how they feel about aspects of the task, is extremely beneficial for your note-taking as well.

To supplement your later analysis, you may make use of screen and/or voice-recording during testing but only if your participants are comfortable with it and give informed consent. Do not rely on direct recording methods as they can often be distracting or disconcerting and you want people to be relaxed during testing so they can focus, and not be wary of the recording device.

Concluding the Test

When the tasks are all complete you can choose to debrief participants about the full purpose of the test and answer any outstanding questions they may have. If all goes well you will have some data that can be insightful to the development of your application and for addressing design problems, after further analysis.

Collating Results

Usability testing data is extremely useful to user experience and interaction designers as it can inform our decision-making over interface layouts, interaction models, etc. and help us solve problems that get uncovered.

Regardless of whether or not we conduct the testing and research ourselves, it’s important that the data gathered is clearly presented. Graphs, charts and spreadsheets are incredibly useful in your write-up for communicating the breakdown of test results.

Heat Maps

It helps to visualize issues with tasks in a heat map, which is an illustration that accounts for the perceived difficulty of a given task for each participant by colour-coding them in a table.

Example Heat Map

The above is a non-specific example that illustrates how the data can be represented: green for successful completion of the task, yellow for moderate difficulty, red for a lot of difficulty, and black for an incomplete. From this heat map, we can immediately see patterns that we can address by looking deeper into the results; we can see how “Task 1” and “Task 6” presented a lot of difficulty for most of the participants, and that requires further investigation.

More Usable Free Software

Conducting usability testing on free software shouldn’t be an afterthought of the development process but rather it should be a deeply integrated component. However, the reality is that the resources of free software projects (including large ones like GNOME) are quite limited, so one of my goals with this post is to empower you to do more usability testing on your own—you don’t have to be an expert—and to help out and contribute to larger software projects to make up for the limits on resources.

Usability Testing GNOME

Since I work on the design of GNOME, I would be more than happy to help you facilitate usability testing for GNOME applications and software. So do not hesitate to reach out if you would like me to review your plans for usability testing or to share results of any testing that you do.


Further Reading

If you’re interested in more resources or information about usability, I can recommend some additional reading:

Ubuntu Blog: Multi-tenancy in MAAS


In this blog post, we are going to introduce the concept of multi-tenancy in MAAS. This allows operators to have different groups of users own a group of resources (machines) without ever knowing about other groups of users, enabling enhanced machine utilisation.

A common use case for medium and large-scale environments is to provide a different set of machines for different users or groups of users. MAAS has historically approached this by allowing users to pre-reserve machines (allocate) for later use. However, as of MAAS 2.4 we introduced the concept of resource pools.

Resource pools and role-based access control

Resource pools are a new way to organise your physical and virtual resources. A resource pool is effectively a bucket in which one or more machines can be placed. A machine can only be in one resource pool.

Figure 1. MAAS resource pools.
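On the command line, this looks roughly as follows (a sketch assuming a logged-in MAAS CLI profile in $PROFILE; the pool name, description and $SYSTEM_ID are placeholders):

maas $PROFILE resource-pools create name=staging description="Machines for the staging team"
maas $PROFILE resource-pools read
maas $PROFILE machine update $SYSTEM_ID pool=staging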

But now that you have organised your machines, how do you go about assigning users or groups to the different resource pools and preventing users from seeing resources that are assigned to someone else? Well, this is done with RBAC.

Role-based access control (RBAC) is supported in MAAS as an external micro-service that provides this functionality. The Canonical RBAC service allows administrators to select which users or groups of users can have access to a given resource pool, and the role that they can play within the resource pool itself.

Figure 2. RBAC service.

RBAC provides four roles that give flexibility in multi-tenant environments:

  • Administrator – Maps to the current administrative user in MAAS.
  • Operator – Provides administrative permissions in the context of a resource pool.
  • User – Maps to the current non-administrative user of MAAS.
  • Auditor – Can only read information.

As MAAS can organise its physical and virtual resources in resource pools and prevent access to those resources via RBAC, what about authentication?  Where do users or user groups come from?

To provide authentication, MAAS and RBAC integrate with Candid, the Canonical identity manager service. Candid is a centralised authentication service that integrates with LDAP, Active Directory, SSO, and others. For MAAS, Candid provides LDAP authentication, which is the source of users or user groups.

This allows administrators to continue to use their current authentication systems and seamlessly integrate them with MAAS and RBAC.

So, multi-tenancy?

As we have learnt, MAAS achieves multi-tenancy by making use of resource pools, RBAC and LDAP (with Candid). With this, administrators in MAAS can ensure certain users or groups within their organisation have access to only one or multiple resource pools.

But, how is this really multi-tenancy? It is because users (or user groups) will only be able to access the resources within the resource pools they have access to; they won’t even be able to see that other resource pools exist. This provides complete separation, making MAAS very flexible for large-scale environments or SMBs.

For more information please contact us.



Simos Xenitellis: Using the LXD Kali container image


If you have a look at the list of container images for LXD (repository images:), you will notice the recent addition of the Kali container images. These were added by Re4son (@kali.org).

But Kali is a security distribution – does it make sense to create system containers with Kali? LXD offers system containers, which are similar to virtual machines, but not quite like virtual machines.

In this post we see how to use a Kali system container with the WiFi security tools by attaching a network card to the container.

Prerequisites

We are using a USB WiFi adapter. Any network card can work here, but it’s a USB WiFi adapter for today.

When you connect the adapter to your computer, it will be autoconfigured by NetworkManager. We do not want that. We want the network adapter to be unmanaged, not managed by NetworkManager on the host.

We have connected the network adapter. Let’s see it and identify the MAC address. The MAC address is 00:0f:00:7c:e0:55.

$ iwconfig
wlx000f007ce055  IEEE 802.11  ESSID:off/any  
          Mode:Managed  Access Point: Not-Associated   Tx-Power=0 dBm   
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Power Management:on
$ ifconfig wlx000f007ce055
wlx000f007ce055: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 00:0f:00:7c:e0:55  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We edit and append the following lines in /etc/NetworkManager/NetworkManager.conf.

[keyfile] 
unmanaged-devices=mac:00:0f:00:7c:e0:55

Then, we reload NetworkManager so that it treats this network device as unmanaged.

$ sudo systemctl reload NetworkManager

Finally, we check whether RFKILL is blocking the wireless adapter. If it is, we unblock it.

$ rfkill list
0: : Wireless LAN
    Soft blocked: yes
    Hard blocked: no
$ rfkill unblock all
$ rfkill list
0: : Wireless LAN
    Soft blocked: no
    Hard blocked: no

We are now good to go. The network device is unmanaged, and the Kali container can now start using it.

Creating a Kali system container

Let’s see the list of available Kali container images. The command is lxc image list and we specify the repository images:. The kali argument means that we are searching for all images that contain kali anywhere in the name. There is a kali container image, which defaults (in my case) to the x86_64 image (same as the host). There are also container images for i386, arm64, armel and armhf, which apparently correspond to all supported architectures of the Kali project.

$ lxc image list images:kali
+---------------------------+--------------+--------+-------------------------------------+---------+---------+-------------------------------+
|           ALIAS           | FINGERPRINT  | PUBLIC |             DESCRIPTION             |  ARCH   |  SIZE   |          UPLOAD DATE          |
+---------------------------+--------------+--------+-------------------------------------+---------+---------+-------------------------------+
| kali (5 more)             | 00302e62e5f2 | yes    | Kali current amd64 (20190813_17:14) | x86_64  | 87.70MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/arm64 (2 more)       | 54747ab801e8 | yes    | Kali current arm64 (20190813_17:14) | aarch64 | 84.52MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/armel (2 more)       | 351adfe2cbf7 | yes    | Kali current armel (20190813_17:34) | armv7l  | 82.22MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/armhf (2 more)       | 0281fd185db7 | yes    | Kali current armhf (20190813_17:14) | armv7l  | 82.73MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/cloud (3 more)       | 2e6c1ee1604a | yes    | Kali current amd64 (20190813_17:14) | x86_64  | 87.70MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/cloud/arm64 (1 more) | 2353897a13d5 | yes    | Kali current arm64 (20190813_17:14) | aarch64 | 84.52MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/cloud/armel (1 more) | 16ce426ef8ca | yes    | Kali current armel (20190813_17:14) | armv7l  | 82.23MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/cloud/armhf (1 more) | 61cdf9b01f18 | yes    | Kali current armhf (20190813_17:14) | armv7l  | 82.73MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/cloud/i386 (1 more)  | ff564b63033a | yes    | Kali current i386 (20190813_17:14)  | i686    | 88.65MB | Aug 13, 2019 at 12:00am (UTC) |
| kali/i386 (2 more)        | 7de5fdd5ab55 | yes    | Kali current i386 (20190813_17:55)  | i686    | 88.65MB | Aug 13, 2019 at 12:00am (UTC) |
+---------------------------+--------------+--------+-------------------------------------+---------+---------+-------------------------------+

There is a kali and there is a kali/cloud. The container images have almost the same size, but different checksums. The cloud container images simply have support for cloud-init enabled. This means that if you need to pass initial configuration to the container using cloud-init, use the cloud container images.

We launch a container called mykali from the images:kali container image.

$ lxc launch images:kali mykali

Let’s have a look at the newly created container. It got a private IP address, in my case 10.10.10.20. We will use this later.

$ lxc list mykali
 +--------+---------+--------------------+-----------------------------------------------+------------+-----------+
 |  NAME  |  STATE  |        IPV4        |                     IPV6                      |    TYPE    | SNAPSHOTS |
 +--------+---------+--------------------+-----------------------------------------------+------------+-----------+
 | mykali | RUNNING | 10.10.10.20 (eth0) | fd42:3964:5d29:887a:216:3eff:fe39:56a5 (eth0) | PERSISTENT | 0         |
 +--------+---------+--------------------+-----------------------------------------------+------------+-----------+

Using the Kali LXD container

Let’s get a shell into kali.

$ lxc exec mykali -- /bin/bash
root@mykali:~# 

We are in! Now what? Let’s refresh the package list, and then try something related to WiFi. There is no iw command, so we install the iw package. Still, there is no WiFi card, because the default settings of a LXD container do not include a WiFi interface. We have to add such an interface.

root@mykali:~# apt update
Hit:1 https://kali.download/kali/ kali-rolling InRelease
Reading package lists… Done
Building dependency tree
Reading state information… Done
All packages are up to date.
root@mykali:~# iw
bash: iw: command not found
root@mykali:~# apt install iw
...
root@mykali:~# iw
root@mykali:~# logout
$ 

Adding a WiFi adapter to the Kali LXD container

LXD has the facility to make a network interface disappear from the host and appear in a LXD container. The LXD container has exclusive use of the network adapter. And all this is hot-pluggable (with a caveat). Here is again the USB WiFi adapter. It has the interface name wlx000f007ce055 on the host.

$ lsusb
 ...
 Bus 003 Device 006: ID 148f:7601 Ralink Technology, Corp. MT7601U Wireless Adapter
...
$ iw dev
 phy#1
     Interface wlx000f007ce055
         ifindex 40
         wdev 0x100000001
         addr 00:0f:00:7c:e0:55
         type managed
         txpower 0.00 dBm

Let’s move this network interface into the Kali LXD container. We add to the container mykali a device called wifi, which is a nic LXD device. The nictype is physical, with the interface name wlx000f007ce055 on the host, and in the container it will be known as wlan0.

$ lxc config device add mykali wifi nic nictype=physical parent=wlx000f007ce055 name=wlan0
Device wifi added to mykali

Did it work? It sure did!

$ lxc exec mykali -- /bin/bash
root@mykali:~# iw dev
 phy#0
     Interface wlan0
         ifindex 40
         wdev 0x100000001
         addr 00:0f:00:54:e9:aa
         type managed
         txpower 0.00 dBm
 root@mykali:~# 

Let’s put the interface in monitor mode. We use phy0 because the command above says phy#0. The new interface is called mon0. Then we bring the interface up, and it is ready to use.

root@mykali:~# iw phy phy0 interface add mon0 type monitor
root@mykali:~# ip link set mon0 up

Now the interface mon0 is available, and we can use it with any WiFi network security tools in Kali.

Running Aircrack-ng in a LXD Kali system container

We install aircrack-ng in Kali and try out the tool.

root@mykali:~# apt install -y aircrack-ng

The network interface is already prepared in MONITOR mode, therefore we can straight away run commands like the following.

root@mykali:~# airodump-ng mon0

Running Kismet in a LXD Kali system container

We install Kismet in Kali and try out the tool.

root@mykali:~# apt install -y kismet

Then, we run kismet.

root@mykali:~# kismet
...
INFO: Starting Kismet web server…
INFO: Started http server on port 2501

We can now open our Web browser on the host and visit http://10.10.10.20:2501/ to continue with Kismet from the Web interface. Substitute accordingly the private IP address of your mykali container.

We set a username and password, and then logged into the Web interface of Kismet. We went into the Data Sources setting (top left of page), and enabled the Source mon0.

Enabling the data source mon0.

As soon as we enabled the data source, we are able to use Kismet from the Web interface.

Enabling CUDA in a LXD Kali system container

Here is how to enable GPU support in the container. We enable the nvidia.runtime, which means that the GPU libraries are loaded from the NVidia-supplied runtime, and matches our host’s driver version.

$ lxc launch images:kali mykali 
Creating mykali
Starting mykali
$ lxc config set mykali nvidia.runtime true
$ lxc config device add mykali mygpu gpu
Device mygpu added to mykali
$ lxc restart mykali
$ lxc exec mykali -- nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
...

Future steps

In this post, we introduced the LXD Kali container image and showed how to use it with a WiFi adapter. We showed how to use it with the aircrack-ng suite and with Kismet. You can also use it with other tools, such as wifite, which works better with CUDA support.

It is possible to run GUI tools in the Kali system container, though this is not shown in this post.

You can also attach any other network interface into the container. For example, you can put the Kali system container on a VPS and attach to it a dedicated secondary network interface. Or, use the VPS from the emergency console and attach the main network interface to the Kali container.

You can also expose the Kali container on the network using either a bridge or macvlan, and it is as if it were a separate and independent computer.
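With macvlan, for example, something like the following would give the container its own presence on the LAN (a sketch – replace enp5s0 with the name of your host’s network interface; this overrides the eth0 device from the default profile):

$ lxc config device add mykali eth0 nic nictype=macvlan parent=enp5s0
$ lxc restart mykali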

Ubuntu Blog: Canonical joins the ROS 2 Technical Steering Committee


We at Canonical care deeply about robotics. We firmly believe that robots based on Linux are cheaper to develop, more flexible, more secure, and faster to market. One of the contributing factors to this being the case is the Robot Operating System (ROS). ROS is by far the most popular middleware for creating Linux-powered robots. It provides all sorts of open source tools and libraries and pre-made components that solve common problems encountered during robot development. This allows roboticists to avoid needing to reinvent the wheel and instead focus on what really makes their robot unique. Of course, another reason we care about ROS is that most of the ROS community use Ubuntu. We love our users, and we want to make sure the experience they have on Ubuntu is consistently stellar!

We also care deeply about security, and that permeates everything we do. We’ve all seen how the IoT wave has been going in this regard: badly. IoT devices are low-margin, and no one has any incentive to keep them up to date or ensure that they’re secure in the first place. Manufacturers want to drive costs down, and users don’t consider the devices computers and don’t give a second thought to connecting them to the internet. It’s an unfortunate set of circumstances.

We think that the best way out of this situation is to make security and maintenance so easy that it becomes the obvious choice. If it was suddenly easier and cheaper for device manufacturers to create secure devices that can be automatically updated, why wouldn’t they do it? That’s the premise behind snaps and Ubuntu Core: by making complex topics like security and updates transparent and straightforward, we can make the entire ecosystem better for everyone.

Now, we aren’t quite that naive. A secure and robust operating system and packaging and update process are only pieces of a puzzle. If the software running on the devices has security holes, the picture is incomplete. Which brings us back to robotics and ROS.

Robotics is a realm of experimentation: evaluating different hardware and software, pushing ahead with what works, and culling what doesn’t. ROS 1, the version used by most production robots, was designed to be flexible to enable this experimentation. The flexibility of allowing existing components to be easily combined with new ones or swapped with others was valued above all else, at the cost of security.

We haven’t quite seen a wave of robots like we’ve seen with IoT, but we’re pretty sure it’s coming, and we want to make sure it doesn’t suffer the same fate. Just like IoT, the best way to make sure a production robot is secure is to make security as easy and transparent as possible. This is why we’re so interested in ROS 2.

ROS 2 is undergoing active development and is being built with the flexibility of ROS 1 while also supporting the technology necessary to secure it at its very core. A while ago, we joined the effort of fleshing out the ROS 2 security story; we’ve been helping with the design as well as software implementation. In order to effectively coordinate this work as well as show our commitment to it, we’re pleased to announce that we have become a member of the ROS 2 Technical Steering Committee.

We’re genuinely excited to be a part of such an open source powerhouse, and look forward to bringing our security expertise to bear as ROS 2 matures. With the right push during its development, we’re convinced ROS 2 will develop into an ecosystem where security comes naturally.

Ubucon Europe 2019: Our first gold sponsor – ANSOL!


Our first gold sponsor of this event is ANSOL (Associação Nacional para o Software Livre), the Portuguese national association for free and open source software.

ANSOL was officially founded in 2002, with the goal of promoting, sharing, developing and researching free software and its social, political, philosophical, cultural, technical and scientific impacts on society. They work closely with policy makers, companies and other free software promoters to ensure more people can learn about free and open source software.

Thanks to them, we have received significant support to sustain our event and our journey to give you one of the best open source experiences in Sintra.

Want to jump onboard as well?
Visit our Call for Sponsor post for more information.

Ubuntu Podcast from the UK LoCo: S12E21 – Rebelstar Raiders


This week we’ve been using Unity and learning about code of conduct incident response. We bring you a bumper crop of news and events from the Ubuntu community plus we round up some of our favourite stories from the tech world.

It’s Season 12 Episode 21 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Ubuntu Blog: A technical comparison between snaps and debs


How are snaps different from debs? This is a common question that comes up in technical discussions in the Linux community, especially among developers and users who have just embarked on their snap journey and are interested in learning more details. Indeed, the introduction of self-contained application formats like snaps has created a paradigm shift in how Linux applications are built, distributed and consumed. In this article, we’d like to give you an overview of some of the main differences between the two file formats, and how they can best suit your needs.

Debian format at a glance

Debian packages (deb in short) are packages used on the Debian distribution and its derivatives (including Ubuntu), containing the executable files, libraries, and other assets associated with a particular program. Each package is an ar archive and typically carries a deb file extension (.deb). Inside the ar archive, the package contains three components:

  • debian-binary – a file that declares the Debian format number.
  • control.tar – a tar-based archive that contains the package control information and metadata, including pre- and post-install scripts, configuration files, the list of shared library dependencies, and so on.
  • data.tar – a tar-based archive that contains the program binaries and libraries, which will be installed in the Linux system.
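You can list these members yourself with the ar tool (the package name here is a placeholder, and the exact compression suffixes on the tar members vary):

$ ar t example_1.0_amd64.deb
debian-binary
control.tar.xz
data.tar.xz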

The management of Debian packages is done on multiple levels. The low-level management is done through dpkg utility. Most Linux distributions ship with a separate, higher-level utility that offers a more user-friendly syntax. Often, Debian-based systems use apt, but other implementations are possible.
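The split is easy to see in practice (the package name is again a placeholder): dpkg operates on a local .deb file and does not resolve dependencies, while apt fetches the package and anything it depends on from the configured repositories.

$ sudo dpkg -i ./example_1.0_amd64.deb
$ sudo apt install example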

Some distributions also offer a graphical frontend, in the form of a software center. In many cases, the use of the software center is closely tied to the distribution and the use of the desktop environment, creating a tightly coupled stack that cannot easily be broken into separate pieces. For instance, Ubuntu uses Ubuntu Software Center on top of the GNOME desktop environment, whereas Kubuntu uses KDE Discover on top of Plasma desktop, although both use apt under the hood (there are additional mechanisms involved, like PackageKit, but the finer details are beyond the scope of this article).

Snap format at a glance

Snaps are self-contained application packages designed to run on any system that supports them. Practically, this translates into 41 systemd-enabled distributions at the moment. Each snap is a compressed SquashFS package (bearing a .snap extension), containing all the assets required by an application to run independently, including binaries, libraries, icons, etc. The actual data structure of each snap will vary, depending on how it was built, but it will usually resemble the standard Linux filesystem structure. We will discuss the full details of the snap architecture in a separate article.

unsquashfs testsnap_1.0_amd64.snap
Parallel unsquashfs: Using 8 processors
4 inodes (138 blocks) to write
[=============================|] 138/138 100%
created 4 files
created 6 directories
created 0 symlinks
created 0 devices
created 0 fifos

cd squashfs-root/

ls -ltra
total 24
drwxr-xr-x 5 igor igor 4096 Dec 17  2018 ./
drwxrwxr-x 3 igor igor 4096 Aug 27 13:40 ../
drwxr-xr-x 2 igor igor 4096 Dec 17  2018 bin/
-rwxr-xr-x 1 igor igor   38 Dec 17 2018 command-testsnap.wrapper*
drwxr-xr-x 3 igor igor 4096 Dec 17  2018 meta/
drwxr-xr-x 3 igor igor 4096 Dec 17  2018 snap/

Snaps are managed by snap, the command-line userspace utility of the snapd service. With snap, users can query the Snap Store, install and refresh (update) snaps, remove snaps, and perform many other tasks, like manage application snapshots, set command aliases, enable or disable services, etc.
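A typical lifecycle with the snap tool looks like this (using vlc, which also appears in the examples below):

$ sudo snap install vlc
$ sudo snap refresh vlc
$ sudo snap remove vlc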

Practical differences – installation & use

The best way to examine the two formats is to go through a typical setup of an application in a Linux system, from a search done by the user to the installation and actual usage of the software.

Debian package installation and use

In a Debian-based system, a user will search for an application using the high-level command-line package manager (like apt) or using a frontend package manager like Synaptic, KDE Discover, Ubuntu Software Center, or others. Usually, the search results will show any entry that contains the search string, and this can include the desired asset but also shared libraries and development packages that the user may not necessarily require or interact with directly. For instance, searching for vlc (the media player) will return something like:

apt-cache search vlc
browser-plugin-vlc - multimedia plugin for web browsers based on VLC
cubemap - scalable video reflector, designed to be used with VLC
dvblast - Simple and powerful dvb-streaming application
fp-units-multimedia - Free Pascal - multimedia units dependency package
fp-units-multimedia-3.0.4 - Free Pascal - multimedia units
freeplayer - wrapper around vlc for French ADSL FreeBox
freetuxtv - Internet television and radio player

python3-pafy - Download videos and retrieve metadata from YouTube
smtube - YouTube videos browser
vlc - multimedia player and streamer
vlc-bin - binaries from VLC
vlc-data - common data for VLC
vlc-l10n - translations for VLC

In a GUI package manager, the results will be different – usually a shorter list, with fewer entries returned, often more accurately representing what the user expects. This can be a problem, as users with different levels of skills – and methods – may not necessarily achieve the same end goal, based on the same starting parameters, i.e. a search for an application.

The installation will include the main requested program but also any dependencies that the software needs. A user may only want to install vlc, but they will also see (on the command line) that additional assets are going to be set up, usually a number of shared library dependencies, each provided as a separate Debian package. For instance:

sudo apt-get install kmahjongg
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
kdegames-mahjongg-data-kf5 libkf5kmahjongglib-data libkf5kmahjongglib5
The following NEW packages will be installed:
kdegames-mahjongg-data-kf5 kmahjongg libkf5kmahjongglib-data libkf5kmahjongglib5
0 upgraded, 4 newly installed, 0 to remove and 31 not upgraded.

In the example above, the user’s request to install the Kmahjongg game will install four deb packages. In some cases, there could be dozens of library dependencies, and some of these will be common across many applications (like audio and video codecs).

Debian packages support GPG signature verification, but this is typically not used – instead, integrity and verification are handled at the repository level. In other words, if you trust a repository, you inherently trust all of its contents. From a security perspective, this can be a problem, as users can manually add repositories to their system (like PPAs), or manually install deb packages obtained from non-repository sources.

If the installation is interrupted, the system can be left in an inconsistent state that requires repair (tools like apt and dpkg have mechanisms to reconfigure partially installed packages and rebuild their indices) before any additional package management can be completed.
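
On a Debian-based system, a typical recovery sequence might look like this (a sketch, not an exhaustive repair guide):

sudo dpkg --configure -a   # finish configuring any partially installed packages
sudo apt-get install -f    # resolve broken or missing dependencies
sudo apt-get update        # rebuild the package indices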

The installation of software also reveals two additional constraints:

  • There can only be one version of a particular software package installed.
  • Since shared libraries can be installed (or removed) as part of any Debian package installation, it is possible that a separate package setup will introduce a library change that could cause a regression or breakage in an unrelated application.

Snap installation and use

With snaps, things are similar – and yet different. A command-line search using snap and a search in the Snap Store will both include all applications matching the relevant string – and the results will be identical, regardless of which tool is used. Moreover, each result will be a separate snap that contains all the necessary assets to run independently.

snap find vlc
Name            Version                 Publisher  Notes  Summary
vlc             3.0.7                   videolan✓  -      The ultimate media player
dav1d           0.2.0-1-ge29cb9a        videolan✓  -      AV1 decoder from VideoLAN
peerflix        v0.39.0+git1.df28e20    pmagill    -      Streaming torrent client for Node.js
mjpg-streamer   2.0                     ogra       -      UVC webcam streaming tool
audio-recorder  3.0.5+rev1432+pkg-7b07  brlin      -      A free audio-recorder for Linux (EXTREMELY BUGGY)

Snap installations are also different from debs. Since snaps are fully self-contained applications, during the installation the snap package (a SquashFS filesystem archive) is mounted as a read-only loopback device – decompression happens transparently at read time – and a separate writable private area is created in the user’s home directory. Because snaps contain all the elements required to run an application, their disk footprint is typically larger than that of an equivalent deb package. This is partially mitigated by the snap staying compressed, and in some cases a snap might actually take up less space on disk.
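
You can observe this on a system with snaps installed (output varies per system):

mount -t squashfs   # each installed snap appears as a read-only loop mount under /snap
ls ~/snap           # per-snap writable user data lives here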

During the installation, a security profile is created for the snap, which determines what the snap can or cannot do once run. By default, snaps cannot access other snaps, or even the underlying system; specific overrides are required, which we will touch upon shortly. Furthermore, the isolated manner in which snaps are configured means that once the user removes a snap, all of its assets are completely removed from the system.

Snaps are cryptographically signed. Users can install snaps that originate outside the Snap Store by providing an explicit, manual override flag. This is common during development, allowing developers to test their snaps before uploading them to the store.
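
As a sketch, installing a locally built snap looks like this (the filename is a placeholder):

sudo snap install ./my-app_1.0_amd64.snap --dangerous   # skip store signature verification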

Channels & parallel installs

Snaps come with several additional features. The Snap Store supports channels, allowing developers to publish multiple versions of their software in different channels. This gives users the flexibility to switch between channels, e.g. edge, beta or stable, to test and try features available in different versions of the snap.
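
For example, again using vlc (which channels exist depends on what the publisher has released):

snap info vlc                            # list the channels available for the snap
sudo snap install vlc --channel=edge     # install from a specific channel
sudo snap refresh vlc --channel=stable   # move an installed snap to a different channel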

Moreover, users can also install multiple versions of the same snap in parallel. Since each snap lives in isolation, this gives users the freedom to experiment with their software without the fear of breakage or data loss.
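
At the time of writing, parallel installs are gated behind an experimental snapd flag; a minimal sketch, where the _edge suffix is an arbitrary instance name:

sudo snap set system experimental.parallel-instances=true
sudo snap install vlc_edge --channel=edge   # a second, independent instance of vlc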

Application updates

Most Linux distributions have semi-automatic update mechanisms. Users can configure their systems to check for updates periodically, in order to keep them patched, but they also have the option to disable this feature completely (which can sometimes lead to a gap in security coverage). This procedure comes with the same shortcomings as individual package setup, in that an interruption may harm the system’s consistency, and updates could introduce regressions, usually in shared libraries, that break applications.

Snap updates are automatic, transactional and atomic: if an update fails, the existing version of the snap will continue running (the functionality will not be impaired), the buggy update will be deferred, and the application will only be updated once the developer releases a new, improved update that is verified to work well. Of course, there are no silver bullets – there could be functional regressions in the software itself, which will not cause an update to fail.
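
The refresh schedule itself can be inspected and tuned (the time window below is just an example):

snap refresh --time                              # show the current refresh schedule
sudo snap set system refresh.timer=04:00-07:00   # constrain refreshes to a time window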

Runtime security

Earlier, we briefly touched on the security isolation mechanisms. With deb packages, security is enforced on the system level, and may differ from one distribution to another. In some cases, Web-facing applications may be partially restricted, primarily to protect them from exploits. However, in most cases, software installed as Debian packages will have unrestricted access to system resources. Generally, an application will be able to write to disk, use the audio or video devices, or connect to the network.

On the other hand, snaps are designed to be isolated from the system. This is done through a granular mechanism of security policies that prevent snaps from accessing the underlying system in an unchecked manner. Multiple confinement levels are possible; under strict confinement, snaps have no access to any resources, including the home directory, network or display. Per-resource access can be granted using interfaces.

During the snap creation, developers can declare the use of one or more interfaces, which can then provide the necessary functionality to their applications – like audio, USB or perhaps hardware acceleration. We’ve touched on this concept in the Introduction to snapcraft tutorial series, and you may want to read it to get a deeper understanding of how you can use this during software development.
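
On an installed system, users can inspect and manage these connections too; a short sketch (the camera interface is just an example, and the available plugs vary per snap):

snap interfaces vlc               # list the snap's plugs and slots
sudo snap connect vlc:camera      # grant access to a resource
sudo snap disconnect vlc:camera   # revoke it again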

The isolation can have a downside, in that application startup can be slower than that of an equivalent Debian package: launching a snap involves several steps that do not occur in the traditional configuration with debs. This is something we take very seriously, and we have made significant progress in making snaps faster to launch.

Summary of differences

The table below contains the major points covered in the article:

Package                        Debian                                                   Snap
Format                         Ar archive                                               SquashFS archive
Signature verification         Y (often not used)                                       Y
Package manager                dpkg (low-level); different higher-level managers        snap
Front-end                      Many                                                     Snap Store
Installation                   Files copied to /                                        Snap mounted as read-only loopback device
Dependencies                   Shared                                                   Inside each snap or content snaps
Automatic updates              Semi-automatic                                           Y
Transactional updates          N                                                        Y
Multiple installs in parallel  N                                                        Y
Multiple versions              N                                                        Y
Security confinement           Limited                                                  Y
Disk footprint                 Smaller                                                  Larger
Application startup time       Default                                                  Typically longer

Conclusion

Most people don’t really care about the underlying mechanics of software management, but sometimes it can be useful to understand the differences. Traditional Linux software packaging and distribution is designed for a very small, compact target footprint, which was of great value in the days of expensive disk storage, and still is when deploying to devices with limited resources. However, along the way, this method can sometimes lead to breakages during installation and updates.

That does not mean you should abandon your distro software right away! Debian packages are a perfectly valid way of consuming software, and in most cases, they will be a reliable, trusted method. Snaps complement and enhance the traditional ways with a more robust and granular approach, allowing for better control and separation, with reliability and uninterrupted service as primary drivers. This is of particular value in IoT deployments, but desktop users can and will also enjoy the benefits of confinement and isolation.

In this article, we did not touch on the development side at all. The use of different packaging formats and their associated toolchains also has a significant impact on how software is created and distributed. For the most part, this is hidden from the end user, but it is critical to software developers. In future articles, we will address this aspect of snaps vs. debs, too, and also examine how snaps are different from other self-contained application formats available on the market, like Flatpak and AppImage. For the time being, if you have any comments or questions, please join our forum for a discussion.

Photo by Anthony Brolin on Unsplash.

Ubuntu Blog: LXD in 4 Easy Steps



I needed to install a clean instance of Bionic to test some code, but I did not want to use a full virtual machine as I was in a hurry.  To do this, I used LXD to quickly deploy new Bionic and Xenial instances in minutes. 

If you are not familiar with LXD, it is a next generation container management system that is more like a VM than a traditional container. If you find yourself in a similar spot, here are the 3 commands to get an instance of Linux running and a fourth to get you logged in. 

TL;DR 

snap install lxd

lxd init

(accept the default/yes to everything)

lxc launch ubuntu:18.04 bionic

lxc exec bionic -- /bin/bash

For the more inquisitive reader, first you install LXD as a snap.

snap install lxd

Then you set up LXD using lxd init and select ‘yes’ to the defaults. 

lxd init

Next, install your system using lxc launch <the image> <nickname>. Below we install Ubuntu 18.04 and give it the nickname ‘bionic’. 

lxc launch ubuntu:18.04 bionic

To connect to the system, use lxc exec. 

lxc exec bionic -- /bin/bash

Now verify your guest OS. 
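
For example, inside the container (a quick sanity check):

lsb_release -a   # should report Ubuntu 18.04 LTS (bionic)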

I specifically wanted to compare something on Xenial and Bionic, so I ran the same commands except I substituted 16.04 for 18.04. 

Create another LXD instance from the Xenial image and execute bash to connect. 
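
Following the same pattern (the nickname ‘xenial’ is just a choice):

lxc launch ubuntu:16.04 xenial

lxc exec xenial -- /bin/bash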

That is your first step into a larger world. 

What’s Next ? 

One task I do fairly often as a security engineer is to set up a web service and attac^H^H to test it for vulnerabilities. The LXD instances you have created so far are only reachable from the host they are running on. I usually test from a remote host – to do that with LXD, you configure an LXD proxy device. In the following example, we’ll set up a proxy to allow web traffic into our LXD container. 

lxc exec bionic bash

apt install lighttpd

To make sure you are connecting to the right instance, create a page unique to it by editing /var/www/html/index.html. 
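
For example, from inside the container (the page content is arbitrary):

echo 'hello from bionic' > /var/www/html/index.html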

On the host, not the container, you will run lxc config to add a proxy. 


lxc config device add bionic web proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80

Now in your container, start your lighttpd service. 

lxc exec bionic bash

service lighttpd start

Browse from your workstation to the IP of your lxd host. 
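
A quick check from the workstation (substitute the address of your LXD host):

curl http://<lxd-host-ip>/   # should return the page you created above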

With that you now have a very lightweight virtual container environment capable of performing any number of tasks. 

Note For Ubuntu Server Users

These examples were all performed on an Ubuntu 19.04 Desktop; on Ubuntu Server, LXD setup is even simpler. On Ubuntu 16.04 and 18.04, LXD is often pre-installed as a deb; if you want to switch to the snap, run “snap install lxd” followed by “lxd.migrate”. On Ubuntu Server 18.10 and newer, LXD is installed as a snap – simply run “lxd init” to begin using LXD.

Simos Xenitellis: Cloud-init support in LXD container images


cloud-init is a tool to help you customize cloud images. When you launch a cloud image, you can provide it with your cloud-init instructions, and the cloud image will execute them. That way, you can start with a generic cloud image and, as soon as it boots up, it will be configured to your liking.

In LXD, there are two main repositories of container images,

  1. the «ubuntu:» remote, a repository with Ubuntu container images
  2. the «images:» remote, a repository with container images for many distributions.

Until recently, only container images in the «ubuntu:» remote had support for cloud-init.

Now, container images in the «images:» remote have both a traditional version, and a cloud-init version.

Let’s have a look. We search for the Debian 10 container images. The format of the name of the non-cloud-init images is debian/10. The cloud-init images have cloud appended to the name, for example, debian/10/cloud. These are the names for the default architecture; in my case, my host runs amd64. You will also notice the rest of the supported architectures; these do not run (at least not out of the box) on your host, because LXD’s system containers are not virtual machines (there is no hardware virtualization).

$ lxc image list images:debian/10
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
|              ALIAS               | FINGERPRINT  | PUBLIC |              DESCRIPTION               |  ARCH   |   SIZE   |          UPLOAD DATE          |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10 (7 more)               | b1da98aa0523 | yes    | Debian buster amd64 (20190829_05:24)   | x86_64  | 93.21MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/arm64 (3 more)         | 061bf8e54195 | yes    | Debian buster arm64 (20190829_05:24)   | aarch64 | 89.75MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/armel (3 more)         | f45b56483bcc | yes    | Debian buster armel (20190829_05:53)   | armv7l  | 87.75MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/armhf (3 more)         | 8b3223cb7c36 | yes    | Debian buster armhf (20190829_05:55)   | armv7l  | 88.35MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud (3 more)         | df912811b3c3 | yes    | Debian buster amd64 (20190829_05:24)   | x86_64  | 107.57MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/arm64 (1 more)   | c75bae6267e6 | yes    | Debian buster arm64 (20190829_05:29)   | aarch64 | 103.49MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/armel (1 more)   | a9939000f769 | yes    | Debian buster armel (20190829_06:33)   | armv7l  | 101.43MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/armhf (1 more)   | 8840418a2b4f | yes    | Debian buster armhf (20190829_05:53)   | armv7l  | 101.93MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/i386 (1 more)    | 79ebaba3b386 | yes    | Debian buster i386 (20190829_05:24)    | i686    | 108.85MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/ppc64el (1 more) | dcbfee6585b3 | yes    | Debian buster ppc64el (20190829_05:24) | ppc64le | 109.43MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/s390x (1 more)   | f2d6a7310ae1 | yes    | Debian buster s390x (20190829_05:24)   | s390x   | 101.93MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/i386 (3 more)          | f0bc9e2c267d | yes    | Debian buster i386 (20190829_05:24)    | i686    | 94.41MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/ppc64el (3 more)       | fcf56d73d764 | yes    | Debian buster ppc64el (20190829_05:24) | ppc64le | 94.57MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/s390x (3 more)         | 3481aeba0e06 | yes    | Debian buster s390x (20190829_05:24)   | s390x   | 88.02MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+

I have written a post about using cloud-init with LXD containers.

Another use of cloud-init is to statically set the IP address of the container.
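
As a minimal sketch of passing cloud-init user-data to one of these cloud images (the container name and the package are arbitrary examples):

lxc launch images:debian/10/cloud mycontainer --config=user.user-data="#cloud-config
package_update: true
packages:
  - htop"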

Summary

The container images in the images: remote now have support for cloud-init. Instead of adding cloud-init support to the existing images, there are new container images with /cloud appended to their names that have cloud-init support.


Dimitri John Ledkov: How to disable TLS 1.0 and TLS 1.1 on Ubuntu

Example of website that only supports TLS v1.0, which is rejected by the client

Overview

TLS v1.3 is the latest standard for secure communication over the internet. It is widely supported by desktops, servers and mobile phones. Recently, Ubuntu 18.04 LTS received an OpenSSL 1.1.1 update, bringing the ability to potentially establish TLS v1.3 connections on the latest Ubuntu LTS release. The Qualys SSL Labs Pulse report shows more than 15% adoption of TLS v1.3. It really is time to migrate from TLS v1.0 and TLS v1.1.

As announced on the 15th of October 2018, Apple, Google, and Microsoft will disable TLS v1.0 and TLS v1.1 support by default, and thus require TLS v1.2 to be supported by all clients and servers. Similarly, Ubuntu 20.04 LTS will require TLS v1.2 as the minimum TLS version as well.

To prepare for the move to TLS v1.2, it is a good idea to disable TLS v1.0 and TLS v1.1 on your local systems and start observing and reporting any websites, systems and applications that do not support TLS v1.2.

How to disable TLS v1.0 and TLS v1.1 in Google Chrome on Ubuntu

  1. Create policy directory
    sudo mkdir -p /etc/opt/chrome/policies/managed
  2. Create /etc/opt/chrome/policies/managed/mintlsver.json with
    {
        "SSLVersionMin" : "tls1.2"
    }

How to disable TLS v1.0 and TLS v1.1 in Firefox on Ubuntu

  1. Navigate to about:config in the URL bar
  2. Search for security.tls.version.min setting
  3. Set it to 3, which stands for a minimum of TLS v1.2

How to disable TLS v1.0 and TLS v1.1 in OpenSSL

  1. Edit /etc/ssl/openssl.cnf
  2. After oid_section stanza add
    # System default
    openssl_conf = default_conf
  3. At the end of the file add
    [default_conf]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    MinProtocol = TLSv1.2
    CipherString = DEFAULT@SECLEVEL=2
  4.  Save the file

How to disable TLS v1.0 and TLS v1.1 in GnuTLS

  1. Create config directory
    sudo mkdir -p /etc/gnutls/
  2. Create /etc/gnutls/default-priorities with
    SYSTEM=SECURE192:-VERS-ALL:+VERS-TLS1.3:+VERS-TLS1.2 

After performing the above tasks, most common applications will use TLS v1.2+.
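
One way to verify the effect is with openssl s_client (example.com stands in for any host you want to test):

openssl s_client -connect example.com:443 -tls1_1   # should now fail to negotiate
openssl s_client -connect example.com:443 -tls1_2   # should still succeed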

I have set these defaults on my systems, and I occasionally hit websites that only support TLS v1.0 and I report them. Have you found any websites and systems you use that do not support TLS v1.2 yet?

Stephen Michael Kellat: Media Operations Proposal


After some editing and then a few times around the LaTeX compilation merry-go-round, a likely OperationsProposal.pdf is appended hereto. A goal is to avoid recreating Fernwood 2 Night but I think we can manage that. We're also not re-creating Live From Here either. After all we are not trying to create a comedy but rather an actual news program.

As a digression, I will point out that with Hurricane Dorian still a threat the National Hurricane Center has reactivated their semi-experimental, still irregular podcast at https://www.nhc.noaa.gov/audio/ with a feed address of https://www.nhc.noaa.gov/audio/podcast.xml. A very good source for podcast discovery remains gpodder.net and I encourage its use. Unfortunately the discovery platform still needs a maintainer.

Jonathan Carter: Free Software Activities (2019-08)


Ah, spring time at last. The last month I caught up a bit with my Debian packaging work after the Buster freeze, release and subsequent DebConf. Still a bit to catch up on (mostly kpmcore and partitionmanager that’s waiting on new kdelibs and a few bugs). Other than that I made two new videos, and I’m busy with renovations at home this week so my home office is packed up and in the garage. I’m hoping that it will be done towards the end of next week, until then I’ll have little screen time for anything that’s not work work.

2019-08-01: Review package hipercontracer (1.4.4-1) (mentors.debian.net request) (needs some work).

2019-08-01: Upload package bundlewrap (3.6.2-1) to debian unstable.

2019-08-01: Upload package gnome-shell-extension-dash-to-panel (20-1) to debian unstable.

2019-08-01: Accept MR!2 for gamemode, for new upstream version (1.4-1).

2019-08-02: Upload package gnome-shell-extension-workspaces-to-dock (51-1) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-hide-activities (0.00~git20131024.1.6574986-2) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-trash (0.2.0-git20161122.ad29112-2) to debian unstable.

2019-08-04: Upload package toot (0.22.0-1) to debian unstable.

2019-08-05: Upload package gamemode (gamemode-1.4.1+git20190722.4ecac89-1) to debian unstable.

2019-08-05: Upload package calamares-settings-debian (10.0.24-2) to debian unstable.

2019-08-05: Upload package python3-flask-restful (0.3.7-3) to debian unstable.

2019-08-05: Upload package python3-aniso8601 (7.0.0-2) to debian unstable.

2019-08-06: Upload package gamemode (1.5~git20190722.4ecac89-1) to debian unstable.

2019-08-06: Sponsor package assaultcube (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-06: Sponsor package assaultcube-data (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-07: Request more info on Debian bug #825185 (“Please which tasks should be installed at a default installation of the blend”).

2019-08-07: Close debian bug #689022 in desktop-base (“lxde: Debian wallpaper distorted on 4:3 monitor”).

2019-08-07: Close debian bug #680583 in desktop-base (“please demote librsvg2-common to Recommends”).

2019-08-07: Comment on debian bug #931875 in gnome-shell-extension-multi-monitors (“Error loading extension”) to temporarily avoid autorm.

2019-08-07: File bug (multimedia-devel)

2019-08-07: Upload package python3-grapefruit (0.1~a3+dfsg-7) to debian unstable (Closes: #926414).

2019-08-07: Comment on debian bug #933997 in gamemode (“gamemode isn’t automatically activated for rise of the tomb raider”).

2019-08-07: Sponsor package assaultcube-data (1.2.0.2.1-2) for debian unstable (e-mail request).

2019-08-08: Upload package calamares (3.2.12-1) to debian unstable.

2019-08-08: Close debian bug #32673 in aalib (“open /dev/vcsa* write-only”).

2019-08-08: Upload package tanglet (1.5.4-1) to debian unstable.

2019-08-08: Upload package tmux-theme-jimeh (0+git20190430-1b1b809-1) to debian unstable (Closes: #933222).

2019-08-08: Close debian bug #927219 (“amdgpu graphics fail to be configured”).

2019-08-08: Close debian bugs #861065 and #861067 (For creating nextstep task and live media).

2019-08-10: Sponsor package scons (3.1.1-1) for debian unstable (mentors.debian.org request) (Closes RFS: #932817).

2019-08-10: Sponsor package fractgen (2.1.7-1) for debian unstable (mentors.debian.net request).

2019-08-10: Sponsor package bitwise (0.33-1) for debian unstable (mentors.debian.net request). (Closes RFS: #934022).

2019-08-10: Review package python-pyspike (0.6.0-1) (mentors.debian.net request) (needs some additional work).

2019-08-10: Upload package connectagram (1.2.10-1) to debian unstable.

2019-08-11: Review package bitwise (0.40-1) (mentors.debian.net request) (needs some further work).

2019-08-11: Sponsor package sane-backends (1.0.28-1~experimental1) to debian experimental (mentors.debian.net request).

2019-08-11: Review package hcloud-python (1.4.0-1) (mentors.debian.net).

2019-08-13: Review package bitwise (0.40-1) (e-mail request) (needs some further work).

2019-08-15: Sponsor package bitwise (0.40-1) for debian unstable (email request).

2019-08-19: Upload package calamares-settings-debian (10.0.20-1+deb10u1) to debian buster (CVE #2019-13179).

2019-08-19: Upload package gnome-shell-extension-dash-to-panel (21-1) to debian unstable.

2019-08-19: Upload package flask-restful (0.3.7-4) to debian unstable.

2019-08-20: Upload package python3-grapefruit (0.1~a3+dfsg-8) to debian unstable (Closes: #934599).

2019-08-20: Sponsor package runescape (0.6-1) for debian unstable (mentors.debian.net request).

2019-08-20: Review package ukui-menu (1.1.12-1) (needs some more work) (mentors.debian.net request).

2019-08-20: File ITP #935178 for bcachefs-tools.

2019-08-21: Fix two typos in bcachefs-tools (Github bcachefs-tools PR: #20).

2019-08-25: Published Debian Package of the Day video #60: 5 Fonts (highvoltage.tv / YouTube).

2019-08-26: Upload new upstream release of speedtest-cli (2.1.2-1) to debian unstable (Closes: #934768).

2019-08-26: Upload new package gnome-shell-extension-draw-on-your-screen to NEW for debian unstable. (ITP: #925518)

2019-08-27: File upstream bug for btfs so that the python2 dependency can be dropped from the Debian package (BTFS: #53).

2019-08-28: Published Debian Package Management #4: Maintainer Scripts (highvoltage.tv / YouTube).

2019-08-28: File upstream feature request in Calamares unpackfs module to help speed up installations (Calamares: #1229).

2019-08-28: File upstream request at smlinux/rtl8723de driver for license clarification (RTL8723DE: #49).

Sam Hewitt: Suspending Patreon


I originally wrote a version of this post on Patreon itself but suspending my page hides my posts on there. Oops.

There’s been a lot of change for me over the past year or two, in real life and as a member of the free software community (like my recent joining of Purism), that has shifted my focus away from why I originally launched a Patreon, so I felt it was time to deactivate my creator page.

The support I got on Patreon for my humble projects and community participation over the many months my page was active will always be much appreciated! Having a Patreon (or some other kind of small recurring financial support service) as a free software contributor fueled not only my ability to contribute but also my enthusiasm for free software. Support for small independent free software developers, designers, contributors and projects from folks in the community (not just through things like Patreon) goes a long way, and I look forward to shifting into a more supportive role myself.

I’m going forward with gratitude to the community, so much thanks to all the folks who were my patrons. Go forth and spread the love! ❤️

Ubuntu Blog: Building a better TurtleBot3


TurtleBot3 was released in 2017 and is positioned as a low-cost, open-source robot kit. For new owners of the TurtleBot3, there are various resources online that will assist you with building your brand new TurtleBot3 out of the box. One such example is the official TurtleBot3 instructional video. While it is a great video to help you with the assembly, there are a few points on practicality that you can take into account. Those few points are the subject of this blog post.

Better Raspberry Pi location placement for accessibility

The optimal location for the Raspberry Pi board would be aft-left of the robot: that way you would have easy access to the USB ports from the back, the HDMI port from the left, and the SD card from the mid-left section.

Unfortunately, the USB cable that comes with the robot is left-angled, and its MicroUSB side is right-angled. This prevents you from placing the board aft-left, so I recommend settling on the next-best option: aft-right.

Raspberry Pi placement

Once the TurtleBot3 is assembled, having placed the Raspberry Pi board aft-right, it will allow you to access the USB ports from the back.

Raspberry Pi access

You’ll also be able to access the SD card for fast switching of the operating system through the mid-right side. You can pinch the SD card between your index finger and thumb and pop it out of its slot quickly. If you can’t reach it with your fingers, you can use a pair of needle-nose pliers. 

SD card access

I do have a word of caution on using the needle-nose pliers: the level of feel and control with the pliers is much reduced compared to your fingers. Squeeze and pull gently, and only when the pliers’ jaws are fully parallel to the SD card, or you might break it and end up with two SD cards. 

With aft-right placement, the HDMI port is, unfortunately, facing the interior of the robot, but there is a silver lining. The HDMI cable is usually very thick and inflexible, and therefore, you can slide the cable in from the left side of the TurtleBot3 like a stick. You should be able to plug it into the Raspberry Pi board without too much trouble.

I should also mention that if you intend to use the Raspberry Pi camera, make sure to use the long camera cable to reach the board in the back.

LIDAR USB board placement for better access

The best location for the LIDAR rotating scanner is on the top platform, for a 360-degree unobstructed view, but its USB output connects to a Raspberry Pi USB port. That creates a challenge when removing the top platform from the TurtleBot3. To alleviate the problem, I recommend placing the LIDAR circuit board one level below the scanner and close to a tile hole. 

LIDAR circuit board location

That way, when you need to separate the top platform from the bottom, the platforms are easily detachable: slide your left index finger in from the top and your right index finger in from the side, and unclip the LIDAR data/power line header from the circuit board, electrically and physically separating the top and middle layers. 

LIDAR circuit board access

Development environment repeatability

While this is not necessarily related to assembly instructions, it is something that can make your life easier when working with TurtleBot3. If you are experimenting a lot with various ROS components, monitoring logs, etc., you will find yourself needing a lot of terminals. Wouldn’t it be nice to automate some of this work? Maybe a single command that could start all the necessary terminals and ROS components? 

Luckily, a utility called tmuxinator fits the bill nicely: a single command, ‘tmuxinator start ros’, builds up a terminal with various open consoles. Tmuxinator is a scripting wrapper for the Linux utility tmux, and you will need to install both with ‘sudo apt install tmuxinator tmux’.

In the example below, you’ll use tmuxinator to create two windows. The first one is called ros_local; in this window, you start ROS components that are local to your development laptop. Panes 1, 2 and 3 respectively launch roscore, turtlebot3_remote.launch and rviz.

ros_local window

You’ll use the second window, called ros_remote, to log on to the TurtleBot3 through ssh and start turtlebot3_robot.launch and turtlebot3_rpicamera.launch. The profile also creates an extra ssh session where you can type in arbitrary commands. Since tmuxinator creates a regular tmux session, all tmux key bindings apply, and you can create additional windows or panes. Run ‘man tmux’ to see all the key bindings tmux supports and how to customise it further.

ros_remote window

Running ‘tmuxinator start ros’ starts a profile called ‘ros’ that we first have to create. To do that, run ‘tmuxinator new ros’. This command creates a YAML file with tmuxinator defaults.

tmuxinator defaults

We can delete everything past line 5 and add our custom windows and panes definition. Since this is YAML, make sure to pay special attention to whitespace: if you make a whitespace mistake, tmuxinator can’t parse the file and won’t start your new profile.

Take a look at the ros.yml contents below; it creates two windows, ros_local and ros_remote, assigns a pane layout that determines where new panes go, and creates three panes under each window.

And finally, below the pane names (roscore, remote_state, rviz) are the bash commands that tmuxinator will run on startup for you. For example, under the ros_remote window, it uses ssh as the first command to log into the TurtleBot3 and executes the rest of the commands on the bot.

# ~/.tmuxinator/ros.yml
name: ros 
root: ~/
windows:
  - ros_local:
      layout: even-vertical
      panes:
        - roscore:
          - cd ~/catkin_ws/
          - roscore
        - remote_state:
          - cd ~/catkin_ws/
          - sleep 2
          - roslaunch turtlebot3_bringup turtlebot3_remote.launch
        - rviz:
          - cd ~/catkin_ws/
          - sleep 10
          - rosrun rviz rviz -d `rospack find turtlebot3_description`/rviz/model.rviz
  - ros_remote:
      layout: even-vertical
      panes:
        - robot:
          - ssh pi@turtle
          - cd ~/catkin_ws/
          - sleep 2
          - roslaunch turtlebot3_bringup turtlebot3_robot.launch
        - lidar:
          - ssh pi@turtle
          - cd ~/catkin_ws/
          - sleep 2
        - rpicam:
          - ssh pi@turtle
          - cd ~/catkin_ws/
          - sleep 7
          - roslaunch turtlebot3_bringup turtlebot3_rpicamera.launch

Since this is a tmux session, you can use it like any other tmux session: you can disconnect from it and connect to it later, or you can destroy the session and have tmux terminate all the programs it ran. Getting everything back is as easy as running ‘tmuxinator start ros’; for more information on how to use tmux, see the tmux man page.
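
For instance (tmuxinator names the session after the profile, so ‘ros’ here):

tmux detach                # or press Ctrl-b d inside the session
tmux attach -t ros         # re-attach to the running session later
tmux kill-session -t ros   # destroy the session and terminate its programs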

Conclusion

The interlocking jigsaw puzzle-like tiles of TurtleBot3 allow for extensive customisation. You don’t necessarily even need to follow any instructions; you are free to make it your own. Let me know what your experience is with TurtleBot3 assembly. I am particularly interested in what kind of customisations or ‘gotchas’ you have come up with for your projects.
