Channel: Planet Ubuntu

Nish Aravamudan: [USD #1] Ubuntu Server Dev git Importer


This is the first in a series of posts about the Ubuntu Server Team’s git importer (usd). There is a lot to discuss: why it’s necessary, the algorithm, using the tooling for doing merges, using the tooling for contributing one-off fixes, etc. But for this post, I’m just going to give a quick overview of what’s available and will follow-up in future posts with those details.

The importer was first announced here and then a second announcement was made here. But both those posts are pretty out-of-date now… I have written a relatively current guide to merging which does talk about the tooling here, and much of that content will be re-covered in future blog posts.

The tooling is browse-able here and can be obtained via

git clone https://git.launchpad.net/usd-importer

This will provide a usd command in the local repository’s bin directory. That command, much like git itself, is the launching point for interacting with imported trees — both for importing them and for using them:

usage: usd [-h] [-P PARENTFILE] [-L PULLFILE]
 build|build-source|clone|import|merge|tag ...

Ubuntu Server Dev git tool

positional arguments:
 build|build-source|clone|import|merge|tag
 
 build - Build a usd-cloned tree with dpkg-buildpackage
 build-source - Build a source package and changes file
 clone - Clone package to a directory
 import - Update a launchpad git tree based upon the state of the Ubuntu and Debian archives
 merge - Given a usd-import'd tree, assist with an Ubuntu merge
 tag - Given a usd-import'd tree, tag a commit respecting DEP14

...

More information is available at https://wiki.ubuntu.com/UbuntuDevelopment/Merging/GitWorkflow.

You can run usd locally without arguments to view the full help.
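If you just want to poke at it, a minimal setup looks something like the following sketch (assuming you run it from the directory you cloned into, and relying on the bin/ directory mentioned above):

# Clone the importer and put its bin/ directory on PATH
git clone https://git.launchpad.net/usd-importer
export PATH="$PWD/usd-importer/bin:$PATH"

# Running usd without arguments (or with -h) prints the full help shown above
usd -h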

Imported trees currently live here. This will probably change in the future as we work with the Launchpad team to integrate the functionality. As you can see, we currently have 411 repositories (as of this post), which is a consequence of having the importer running automatically. Every 20 minutes or so, the usd-cron script checks whether any of the source packages listed in usd-cron-packages.txt have new publishes in Debian or Ubuntu and, if so, runs usd import on them.
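The cron script itself is not reproduced here; conceptually it boils down to something like this sketch (the exact arguments of usd import are an assumption, check the built-in help for the real interface):

# Rough sketch of what usd-cron does every ~20 minutes -- not the real script.
# Assumes 'usd import <source-package>' is a no-op when nothing new was published.
while read -r srcpkg; do
    usd import "${srcpkg}"
done < usd-cron-packages.txt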

I think that’s enough for the first post! Just browsing the code and the imported trees is pretty interesting (running gitk on an imported repository gives you a very interesting visual of Ubuntu development). I’ll dig into details in the next post (probably of many).



Simon Quigley: What I've Been Up To


It's been a long time since I've blogged about anything. I've been busy with lots of different things. Here's what I've been up to.

Lubuntu

First off, let's talk about Lubuntu. A couple of different things have happened (or, in some cases, haven't).

Release Management

Walter Lapchynski recently passed on the position of Lubuntu Release Manager to me. He has been my mentor ever since I joined Ubuntu/Lubuntu in July of 2015, and I'm honored to be able to step up to this role.

Here's what I've done as Release Manager from then to today:

Sunsetted Lubuntu PowerPC daily builds for Lubuntu Zesty Zapus.

This was something we had been looking at for a while, and it just happened to happen immediately after I became Release Manager. It wasn't really our hand pushing this forward, per se. The Ubuntu Technical Board voted to end the PowerPC architecture in the Ubuntu archive for Zesty before this, and I thought it was a good time to carry this forward for Lubuntu.

Helped get Zesty Zapus Beta 1 out the door for Ubuntu and for Lubuntu as well.

Discussed Firefox and ALSA's future in Ubuntu.

When Firefox 52 was released into xenial-updates, it broke Firefox's sound functionality for Lubuntu 16.04 users, as Lubuntu 16.04 LTS uses ALSA. Despite what a certain Ubuntu site says, this was because ALSA support was disabled in the default build of Firefox, not completely removed. I won't get into it (I don't want to start a flame war), but this wasn't really something Lubuntu messed up, as the original title (and content) of the article ("Lubuntu users are left with no sound after upgrading Firefox") implied.

I recently brought this up for discussion (I didn't know the part just mentioned when I sent the email linked above), and for the time being ALSA support will be re-enabled in the Ubuntu build. As we continue to update to future Firefox releases, this will result in bitrot, so eventually we will need to switch away from Firefox.

I'm personally against switching to Chromium, as it's not lightweight and it's a bit bloated. I have also recently started using Firefox, and it's been a lot faster for me than Chromium was. But, that's a discussion for another day, and within the next month, I will most likely bring it up for discussion on the lubuntu-devel mailing list. I'll probably write another blog post when I send that email, but we'll see.

Got Zesty Zapus Final Beta/Beta 2 out the door for Lubuntu.

LXQt

Lubuntu 17.04 will not ship with LXQt.

That's basically the bottom line here. ;)

I've been working to start a project on Launchpad that will allow me to upload packages to a PPA and have it build an image from that, but I'm still waiting to hear back on a few things for that.

You may be asking, "So why don't we have LXQt yet?" The answer to that question is, I've been busy with a new job, school, and many other things in life and haven't gotten the chance to heavily work on it much. I have a plan in mind, but it all depends on my free time from this point on.

That being said, if you want to get involved, please don't be afraid to send an email to the Lubuntu developers mailing list. We're all really friendly, and we'll be very willing to get you started, no matter your skill level. This is exactly the reason why LXQt is taking so long. It's because I'm pretty much the only one working on this specific subproject, and I don't have all the time in the world.

Donations

While this isn't specifically highlighting any work I've done in this area, I'd like to provide some information on this.

Lubuntu has been looking for a way to accept donations for a long time. Donations to Lubuntu would help fund:

  • Work on Lubuntu (obviously).
  • Work on upstream projects we use and install by default (LXQt, for example, in the future).
  • Travel to conferences for Lubuntu team members.
  • Much more...

A goal that I specifically have with this is to be as transparent as possible about any donations we receive and specifically where they go. But, we have to get it set up first.

While I am still a minor in the country I reside in and (most likely) cannot make any legal decisions about funds (yet), Walter has been looking for a lawyer to help sort out something along the lines of a "Lubuntu Foundation" (or something like that) to manage the funds in a way that doesn't give only one person control. So if you know a lawyer (or are one) that would be willing to help us set that up, please contact me or Walter when you can.

Ubuntu Weekly Newsletter

Before Issue 500 of the Ubuntu Weekly Newsletter, Elizabeth K. Joseph (Lyz) was in the driver's seat of the team. She helped get every release out on time every week without fail (with the exception of two-week issues, but that's irrelevant right now). Before I go on, I just want to say a big thank you to Lyz. Your contributions were really appreciated. :)

She had taken the time to show me not only how to write summaries in the very beginning, but how to edit, and even publish a whole issue. I'm very grateful for the time she spent with me, and I can't thank her enough.

Fast forward to 501: I ended up stepping up to get the email sent to summary writers and ultimately the whole issue published. I was nervous, as I had never published an issue on my own (Lyz and I had always split the tasks), but I successfully pressed the right buttons and got it out. Before publishing, I had some help from Paul White (his last issue as a contributor, thank you as well) and others to get summaries done and the issue edited.

Since then, I've pretty much stepped up to fill in the gaps for Lyz. I wouldn't necessarily consider anything official yet, but for now, this is where I'll stay.

But, it's tough to get issues of UWN out. I have a new respect for everything Lyz did and all of the hard work she put into each issue. This is a short description of what happens each week:

  • Collect items during the week and put them on the Google Document.
  • On Friday, clean up the doc and send out to summary writers.
  • Over the weekend, people write summaries.
  • On Sunday, it's copied to the wiki, stats are added, and it's sent out to editors.
  • On Monday, it's published.

Wash, rinse, and repeat.

It's incredibly easy to write summaries. In fact, the email was just sent out earlier to summary writers. If you want to take a minute or two (that's all it takes for contributing a summary) to help us out, hop on to the Google Document, refer to the style guidelines linked at the top, and help us out. Then, when you're done, put your name on the bottom if you want to be credited. Every little bit helps!

Other things

About this website

  • I think I can finally implement a comments section so people can leave easy feedback. This is a huge step forward, given that I write the HTML for this website completely from scratch.
  • I wrote a hacky Python script that I can use for writing blog posts. I can just write everything in Markdown, and it will do all the magic bits. I manually inspect it, then just git add, git commit, and git push it.
  • I moved the website to GitLab, and with the help of Thomas Ward, got HTTPS working.

For the future

  • I've been inspired by some of the Debian people blogging about their monthly contributions to FLOSS, so I'm thinking that's what I'll do. It'll be interesting to see what I actually do in a month's time... who knows what I'll find out? :)

So that's what I've been up to. :)

Nathan Haines: Winners of the Ubuntu 17.04 Free Culture Showcase


Spring is here and the release of Ubuntu 17.04 is just around the corner. I've been using it for two weeks and I can't say I'm disappointed! But one new feature that never disappoints me is the appearance of the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 17.04, 96 images were submitted to the Ubuntu 17.04 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in: the top choices were voted on by certain members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 17.04:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 17.04 on April 13th.

Jo Shields: Mono repository changes, beginning Mono vNext


Up to now, Linux packages on mono-project.com have come in two flavours – RPM built for CentOS 7 (and RHEL 7), and .deb built for Debian 7. Universal packages that work on the named distributions, and anything newer.

Except that’s not entirely true.

Firstly, there have been “compatibility repositories” users need to add, to deal with ABI changes in libtiff, libjpeg, and Apache, since Debian 7. Then there’s the packages for ARM64 and PPC64el – neither of those architectures is available in Debian 7, so they’re published in the 7 repo but actually built on 8.

A large reason for this is difficulty in our package publishing pipeline – apt only allows one version-architecture mix in the repository at once, so I can’t have, say, 4.8.0.520-0xamarin1 built on AMD64 on both Debian 7 and Ubuntu 16.04.

We’ve been working hard on a new package build/publish pipeline, which can properly support multiple distributions, based on Jenkins Pipeline. This new packaging system also resolves longstanding issues such as “can’t really build anything except Mono” and “Architecture: All packages still get built on Jo’s laptop, with no public build logs”

So, here’s the old build matrix:

Distribution   Architectures
Debian 7       ARM hard float, ARM soft float, ARM64 (actually Debian 8), AMD64, i386, PPC64el (actually Debian 8)
CentOS 7       AMD64

And here’s the new one:

Distribution   Architectures
Debian 7       ARM hard float (v7), ARM soft float, AMD64, i386
Debian 8       ARM hard float (v7), ARM soft float, ARM64, AMD64, i386, PPC64el
Raspbian 8     ARM hard float (v6)
Ubuntu 14.04   ARM hard float (v7), ARM64, AMD64, i386, PPC64el
Ubuntu 16.04   ARM hard float (v7), ARM64, AMD64, i386, PPC64el
CentOS 6       AMD64, i386
CentOS 7       AMD64

The compatibility repositories will no longer be needed on recent Ubuntu or Debian – just use the right repository for your system. If your distribution isn’t listed… sorry, but we need to draw a line somewhere on support, and the distributions listed here are based on heavy analysis of our web server logs and bug requests.

You’ll want to change your package manager repositories to reflect your system more accurately, once Mono vNext is published. We’re debating some kind of automated handling of this, but I’m loath to touch users’ sources.list without their knowledge.
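For example, on an Ubuntu 16.04 machine the change would presumably look something like this (the file name and suite names below are assumptions based on the current install instructions and the new build matrix; check mono-project.com for the exact lines once vNext is out):

# Swap the old universal Debian 7 ("wheezy") entry for the one matching your
# distribution -- here Ubuntu 16.04 ("xenial"). Suite names are an assumption
# until the vNext repositories are actually published.
sudo sed -i 's|repo/debian wheezy main|repo/ubuntu xenial main|' \
    /etc/apt/sources.list.d/mono-xamarin.list
sudo apt update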

CentOS builds are going to be late – I’ve been doing all my prototyping against the Debian builds, as I have better command of the tooling. Hopefully no worse than a week or two.

Timo Aaltonen: Mesa 17.0.2 for 16.04 & 16.10


Hi, Mesa 17.0.2 backports can now be installed from the updates ppa. Have fun testing, and feel free to file any bugs you find using ‘apport-bug mesa’.
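If you have not enabled the PPA yet, something like the following should do it (assuming "the updates ppa" refers to ppa:ubuntu-x-swat/updates; adjust if the backports live elsewhere):

# Assumption: "the updates ppa" is ppa:ubuntu-x-swat/updates
sudo add-apt-repository ppa:ubuntu-x-swat/updates
sudo apt update
sudo apt full-upgrade   # pulls in the Mesa 17.0.2 backports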


Stephan Adig: SREs Needed (Berlin Area)


We are looking for skilled people for SRE / DevOps work.

So without further ado, here is the job offering :)

SRE / DevOps

Do you want to be part of an engineering team that focuses on building solutions that maximize the use of emerging technologies to transform our business and achieve superior value and scalability? Do you want a career opportunity that combines your skills as an engineer with your passion for video gaming? Are you fascinated by the technologies behind the internet and cloud computing? If so, join us!

As a part of Sony Computer Entertainment, Gaikai is leading the cloud gaming revolution, putting console-quality video games on any device, from TVs to consoles to mobile devices and beyond.

Our SREs' focus is on three things: overall ownership of production, production code quality, and deployments.

The successful candidate will be self-directed and able to participate in the decision-making process at various levels.

We expect our SREs to have opinions on the state of our service, and provide critical feedback during various phases of the operational lifecycle. We are engaged throughout the S/W development lifecycle, ensuring the operational readiness and stability of our service.

Requirements

Minimum of 5+ years working experience in Software Development and/or Linux Systems Administration role.
Strong interpersonal, written and verbal communication skills.
Available to participate in a scheduled on-call rotation.

Skills & Knowledge

Proficient as a Linux Production Systems Engineer, with experience managing large scale Web Services infrastructure.
Development experience in one or more of the following programming languages:

  • Python (preferred)
  • Bash, Java, Node.js, C++ or Ruby

In addition, experience with one or more of the following:

  • NoSQL at scale (eg Hadoop, Mongo clusters, and/or sharded Redis)
  • Event Aggregation technologies. (eg. ElasticSearch)
  • Monitoring & Alerting, and Incident Management toolsets
  • Virtual infrastructure (deployment and management) at scale
  • Release Engineering (Package management and distribution at scale)
  • S/W Performance analysis and load testing (QA or SDET experience: a plus)

Location

  • Berlin, Germany

Who is hiring?

  • Gaikai / Sony Interactive Entertainment

If you are on LinkedIn, you can go and apply for this job directly.
If you want (but you are not obliged to), feel free to mention my name as a referral.

Ubuntu Insights: Bare Metal Server Provisioning is Evolving the HPC Market


Image credit: Andrius Aleksandravičius

In the early days of High Performance Computing (HPC), ‘Big Data’ was just called ‘Data’ and organizations spent millions of dollars to buy mainframes or large data processing/warehousing systems just to gain incremental improvements in the manipulation of information. Today, IT pros and systems administrators are under more pressure than ever to make the most of these legacy bare metal hardware investments. However, with more and more compute workloads moving to the public cloud, and the natural pressure to do more with less, IT pros are finding it difficult to balance existing infrastructure with the new realities of the cloud. Until now, these professionals have not found the balance needed to achieve more efficiency while using what they already have in-house.

Businesses have traditionally made significant investments in hardware. However, as the cloud has disrupted traditional business models, IT pros needed to find a way to combine the flexibility of the cloud with the power and security of their bare metal servers or internal hardware infrastructure. Canonical’s MAAS (Metal as a Service) solution allows IT organizations to discover, commission, and (re)deploy bare metal servers within most operating system environments, like Windows, Linux, etc. As new services and applications are deployed, MAAS can be used to dynamically re-allocate physical resources to match workload requirements. This means organizations can deploy both virtual and physical machines across multiple architectures and virtual environments, at scale.

MAAS improves the lives of IT Pros!

MAAS was designed to make complex hardware deployments faster, more efficient, and with more flexibility. One of the key areas where MAAS has found significant success is in High Performance Computing (HPC) and Big Data. HPC relies on aggregating computing power to solve large data-centric problems in subjects like banking, healthcare, engineering, business, science, etc. Many large organizations are leveraging MAAS to modernize their OS deployment toolchain (a set of tool integrations that support the development, deployment, operations tasks) and lower server provisioning times.

These organizations found their tools were outdated, thereby prohibiting them from deploying large numbers of servers. Server deployments were slow, modular/monolithic, and could not integrate with tools, drivers, and APIs. By deploying MAAS they were able to speed up their server deployment times as well as integrate with their orchestration platform and configuration management tools like Chef, Ansible, and Puppet, or software modeling solutions like Canonical’s Juju.

For example, financial institutions are using MAAS to deploy Windows servers in their data centre during business hours to support applications and employee systems. Once the day is done, they use MAAS to redeploy the data centre server infrastructure to support Ubuntu Servers and perform batch processing and transaction settlement for the day’s activities. In the traditional HPC world, these processes would take days or weeks to perform, but with MAAS, these organizations are improving their efficiency and reducing infrastructure costs by using existing hardware, while gaining the ability to close out the day’s transactions faster and more efficiently, thus giving financial executives the ability to spend more time with their families and bragging rights at cocktail parties.

HPC is just one great use case for MAAS where companies can recognize immediate value from their bare metal hardware investments. Over the next few weeks we will go deeper into the various use cases for MAAS, but in the meantime, we invite you to try MAAS for yourself on any of the major public clouds using Conjure Up.
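A quick way to kick the tyres, assuming you have snap support and that conjure-up lists a MAAS spell (the spell name is not confirmed here, so pick it from the interactive list):

# Install conjure-up and browse the available spells interactively
sudo snap install conjure-up --classic
conjure-up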

If you would like to learn more about MAAS or see a demo, contact us directly.

Ubuntu Insights: Job Concurrency in Kubernetes: LXD & CPU pinning to the rescue


A few days ago, someone shared with me a project to run video transcoding jobs in Kubernetes.

During her tests, made on a default Kubernetes installation on bare metal servers with 40 cores & 512GB RAM, she allocated 5 full CPU cores to each of the transcoding pods, then scaled up to 6 concurrent tasks per node, thus loading the machines at 75% (on paper). The expectation was that these jobs would run at the same speed as a single task.

The result was slightly underwhelming: as concurrency went up, the performance of individual tasks went down. At maximum concurrency, they actually observed a 50% decrease in single-task performance.
I did some research to understand this behavior. It is referenced in several Kubernetes issues such as #10570 and #171, and more generally via a Google search. The documentation itself sheds some light on how the default scheduler works and why performance can be impacted by concurrency on intensive tasks.

There are different methods to allocate CPU time to containers:

  • CPU Pinning: each container gets a set of cores

CPU Pinning
If the host has enough CPU cores available, allocate 5 “physical cores” that will be dedicated to this pod/container

  • Temporal slicing: considering the host has N cores collectively representing an amount of compute time which you allocate to containers. 5% of CPU time means that for every 100ms, 5ms of compute are dedicated to the task.

Time Slicing
Temporal slicing: each container gets allocated randomly on all nodes.

Obviously, pinning CPUs can be interesting for some specific workloads, but it has a big scaling problem, for the simple reason that you could not run more pods than you have cores in your cluster.

As a result, Docker defaults to the second one, which also ensures you can have less than 1 CPU allocated to a container.

This has an impact on performance which also happens in HPC or any CPU intensive task.

Can we mitigate this risk? Maybe. Docker provides the cpuset option at the engine level. It’s not, however, leveraged by Kubernetes. LXD containers, on the other hand, have the ability to be pinned to physical cores via cpusets, in an automated fashion, as explained in this blog post by @stgraber.
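To make the two models concrete, this is roughly how they map onto plain docker run flags (this is the engine-level view, not something Kubernetes exposes directly):

# CPU pinning: the container only ever runs on physical cores 0-4
docker run --rm --cpuset-cpus="0-4" jrottenberg/ffmpeg:ubuntu -version

# Temporal slicing: 50% of one core's worth of time (50ms every 100ms period),
# scheduled on whichever cores happen to be free
docker run --rm --cpu-period=100000 --cpu-quota=50000 jrottenberg/ffmpeg:ubuntu -version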

This opens 2 new options for scheduling our workloads:

  • Slice up our hosts in several LXD Kubernetes Workers and see if pinning CPUs for workers can help us;
  • Include a “burst” option with the native Kubernetes resource primitives, and see if that can help maximise compute throughput in our cluster.

Let’s see how these compare!

TL;DR

You don’t all have the time to read the whole thing so in a nutshell:

  • If you always allocate less than 1 CPU to your pods, concurrency doesn’t impact CPU-bound performance;
  • If you know in advance your max concurrency and it is not too high, then adding more workers with LXD and CPU pinning them always gets you better performance than native scheduling via Docker;
  • The winning strategy is always to over-provision CPU limits to the max so that every bit of performance is allocated instantly to your pods.

Note: these results are in AWS, where there is a hypervisor between the metal and the units. I am waiting for hardware with enough cores to complete the task. If you have hardware you’d like to throw at this, be my guest and I’ll help you run the tests.

The Plan

In this blog post, we will do the following:

  • Setup various Kubernetes clusters: pure bare metal, pure cloud, in LXD containers with strict CPU allocation.
  • Design a minimalistic Helm chart to easily create parallelism
  • Run benchmarks to scale concurrency (up to 32 threads/node)
  • Extract and process logs from these runs to see how concurrency impacts performance per core

Requirements

For this blog post, it is assumed that:

  • You are familiar with Kubernetes
  • You have notions of Helm charting or of Go Templates, as well as using Helm to deploy stuff
  • Having preliminary knowledge of the Canonical Distribution of Kubernetes (CDK) is a plus, but not required.
  • Downloading the code for this post:
git clone https://github.com/madeden/blogposts
cd blogposts/k8s-transcode

Methodology

Our benchmark is a transcoding task. It uses a ffmpeg workload, designed to minimize time to encode by exhausting all the resources allocated to compute as fast as possible. We use a single video for the encoding, so that all transcoding tasks can be compared. To minimize bottlenecks other than pure compute, we use a relatively low bandwidth video, stored locally on each host.

The transcoding job is run multiple times, with the following variations:

  • CPU allocation from 0.1 to 7 CPU Cores
  • Memory from 0.5 to 8GB RAM
  • Concurrency from 1 to 32 concurrent threads per host
  • (Concurrency * CPU Allocation) never exceeds the number of cores of a single host

We measure for each pod how long the encoding takes, then look at correlations between that and our variables.

Charting a simple transcoder

Transcoding with ffmpeg and Docker

When I want to do something with a video, the first thing I do is call my friend Ronan. He knows everything about everything for transcoding (and more)!

So I asked him something pretty straightforward: I want the most CPU intensive ffmpeg transcoding one liner you can think of.

He came back (in less than 30min) with not only the one liner, but also found a very neat docker image for it, kudos to Julien for making this. All together you get:


docker run --rm -v $PWD:/tmp jrottenberg/ffmpeg:ubuntu \
  -i /tmp/source.mp4 \
  -stats -c:v libx264 \
  -s 1920x1080 \
  -crf 22 \
  -profile:v main \
  -pix_fmt yuv420p \
  -threads 0 \
  -f mp4 -ac 2 \
  -c:a aac -b:a 128k \
  -strict -2 \
  /tmp/output.mp4

The key to this setup is -threads 0, which tells ffmpeg that it’s an all-you-can-eat buffet.
For test videos, HD Trailers or Sintel Trailers are great sources. I’m using a 1080p mp4 trailer as the source.

Helm Chart

Transcoding maps directly to the notion of Job in Kubernetes. Jobs are batch processes that can be orchestrated very easily, and configured so that Kubernetes will not restart them when the job is done. The equivalent to Deployment Replicas is Job Parallelism.

To add concurrency, I initially used this notion. It proved to be a bad approach, making it more complicated than necessary to analyze the output logs. So I built a chart that creates many (numbered) jobs, each running a single pod, so I can easily track them and their logs.


{{- $type := .Values.type -}}
{{- $parallelism := .Values.parallelism -}}
{{- $cpu := .Values.resources.requests.cpu -}}
{{- $memory := .Values.resources.requests.memory -}}
{{- $requests := .Values.resources.requests -}}
{{- $multiSrc := .Values.multiSource -}}
{{- $src := .Values.defaultSource -}}
{{- $burst := .Values.burst -}}
---
{{- range $job, $nb := until (int .Values.parallelism) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $type | lower }}-{{ $parallelism }}-{{ $cpu | lower }}-{{ $memory | lower }}-{{ $job }}
spec:
  parallelism: 1
  template:
    metadata:
      labels:
        role: transcoder
    spec:
      containers:
      - name: transcoder-{{ $job }}
        image: jrottenberg/ffmpeg:ubuntu
        args: [
          "-y",
          "-i", "/data/{{ if $multiSrc }}source{{ add 1 (mod 23 (add 1 (mod $parallelism (add $job 1)))) }}.mp4{{ else }}{{ $src }}{{ end }}",
          "-stats",
          "-c:v", "libx264",
          "-s", "1920x1080",
          "-crf", "22",
          "-profile:v", "main",
          "-pix_fmt", "yuv420p",
          "-threads", "0",
          "-f", "mp4",
          "-ac", "2",
          "-c:a", "aac",
          "-b:a", "128k",
          "-strict", "-2",
          "/data/output-{{ $job }}.mp4"
        ]
        volumeMounts:
        - mountPath: /data
          name: hostpath
        resources:
          requests:
{{ toYaml $requests | indent 12 }}
          limits:
            cpu: {{ if $burst }}{{ max (mul 2 (atoi $cpu)) 8 | quote }}{{ else }}{{ $cpu }}{{ end }}
            memory: {{ $memory }}
      restartPolicy: Never
      volumes:
      - name: hostpath
        hostPath:
          path: /mnt
---
{{- end }}

The values.yaml file that goes with this is very very simple:


# Number of // tasks
parallelism: 8
# Separator name
type: bm
# Do we want several input files
# if yes, the chart will use source${i}.mp4 with up to 24 sources
multiSource: false
# If not multi source, name of the default file
defaultSource: sintel_trailer-1080p.mp4
# Do we want to burst. If yes, resource limit will double request.
burst: false
resources:
  requests:
    cpu: "4"
    memory: 8Gi
  max:
    cpu: "25"

That’s all you need. Of course, all sources are in the repo for your usage, you don’t have to copy paste this.

Creating test files

Now we need to generate a LOT of values.yaml files to cover many use cases. The reachable values would vary depending on your context. My home cluster has 6 workers with 4 cores and 32GB RAM each, so I used

  • 1, 6, 12, 18, 24, 48, 96 and 192 concurrent jobs (up to 32/worker)
  • reverse that for the CPUs (from 3 to 0.1 in case of parallelism=192)
  • 1 to 16GB RAM

In the cloud, I had 16 core workers with 60GB RAM, so I did the tests only on 1 to 7 CPU cores per task.

I didn’t do anything clever here, just a few bash loops to generate all my tasks. They are in the repo if needed.

Deploying Kubernetes

MAAS / AWS

The method to deploy on MAAS is the same I described in my previous blog about DIY GPU Cluster. Once you have MAAS installed and Juju configured to talk to it, you can adapt and use the bundle file in src/juju/ via:

juju deploy src/juju/k8s-maas.yaml

For AWS, use the k8s-aws.yaml bundle, which specifies c4.4xlarge as the default instance type. When it's done, download the configuration for kubectl, then initialize Helm with:

juju show-status kubernetes-worker-cpu --format json | \
  jq --raw-output '.applications."kubernetes-worker-cpu".units | keys[]' | \
  xargs -I UNIT juju ssh UNIT "sudo wget https://download.blender.org/durian/trailer/sintel_trailer-1080p.mp4 -O /mnt/sintel_trailer-1080p.mp4"
juju scp kubernetes-master/0:config ~/.kube/config
helm init

Variation for LXD

LXD on AWS is a bit special, because of the network. It breaks some of the primitives that are frequently used with Kubernetes such as the proxying of pods, which have to go through 2 layers of networking instead of 1. As a result,

  • kubectl proxy doesn’t work ootb
  • more importantly, helm doesn’t work because it consumes a proxy to the Tiller pod by default
  • However, transcoding doesn’t require network access but merely a pod doing some work on the file system, so that is not a problem.

The least expensive path I found to resolve the issue is to deploy a specific node that is NOT in LXD but a “normal” VM or node. This node is labeled as a control plane node, and we modify the deployments for tiller-deploy and kubernetes-dashboard to force them onto that node. Making this node small enough ensures that no transcoding ever gets scheduled on it.

I could not find a way to fully automate this, so here is a sequence of actions to run:

juju deploy src/juju/k8s-lxd-c-.yaml

This deploys the whole thing and you need to wait until it’s done for the next step. Closely monitor juju status until you see that the deployment is OK, but flannel doesn’t start (this is expected, no worries).

Then the LXD profile of each LXD node must be adjusted to allow nested containers. In the near future (roadmapped for 2.3), Juju will gain the ability to declare the profiles it wants to use for LXD hosts, but for now we need to do that manually:

NB_CORES_PER_LXD=4 #This is the same number used above to deploy
for MACHINE in 1 2
do
./src/bin/setup-worker.sh ${MACHINE} ${NB_CORES_PER_LXD}
done
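setup-worker.sh is in the repository; at its heart it relies on LXD's CPU limits, which look roughly like the following (the container name here is illustrative only):

# Illustrative only -- pin an LXD container to four specific cores,
# or ask LXD to auto-balance four cores for it.
lxc config set juju-machine-1-lxd-0 limits.cpu 0-3   # explicit core range (pinning)
lxc config set juju-machine-1-lxd-0 limits.cpu 4     # 4 cores, placed by LXD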

If you're watching juju status you will see that flannel suddenly starts working. All good! Now download the configuration for kubectl, then initialize Helm with:


juju scp kubernetes-master/0:config ~/.kube/config
helm init

We need to identify the Worker that is not a LXD container, then label it as our control plane node:

kubectl label $(kubectl get nodes -o name | grep -v lxd) controlPlane=true
kubectl label $(kubectl get nodes -o name | grep lxd) computePlane=true

Now this is where it becomes manual: we need to edit, successively, rc/monitoring-influxdb-grafana-v4, deploy/heapster-v1.2.0.1, deploy/tiller-deploy and deploy/kubernetes-dashboard, to add

nodeSelector:
  controlPlane: "true"

in the definition of the manifest. Use

kubectl edit -n kube-system rc/monitoring-influxdb-grafana-v4

After that, the cluster is ready to run!

Running transcoding jobs

Starting jobs

We have a lot of tests to run, and we do not want to spend too long managing them, so we build a simple automation around them

cd src
TYPE=aws
CPU_LIST="1 2 3"
MEM_LIST="1 2 3"
PARA_LIST="1 4 8 12 24 48"
for cpu in ${CPU_LIST}; do
  for memory in ${MEM_LIST}; do
    for para in ${PARA_LIST}; do
      [ -f values/values-${para}-${TYPE}-${cpu}-${memory}.yaml ] && \
        { helm install transcoder --values values/values-${para}-${TYPE}-${cpu}-${memory}.yaml
          sleep 60
          while [ "$(kubectl get pods -l role=transcoder | wc -l)" -ne "0" ]; do
           sleep 15
          done
        }
     done
  done
done

This will run the tests about as fast as possible. Adjust the variables to fit your local environment

First approach to Scheduling

Without any tuning or configuration, Kubernetes does a decent job of spreading the load over the hosts. Essentially, all jobs being equal, it spreads them round-robin across all nodes. Below is what we observe for a concurrency of 12.

NAME READY STATUS RESTARTS AGE IP NODE
bm-12-1-2gi-0-9j3sh 1/1 Running 0 9m 10.1.70.162 node06
bm-12-1-2gi-1-39fh4 1/1 Running 0 9m 10.1.65.210 node07
bm-12-1-2gi-11-261f0 1/1 Running 0 9m 10.1.22.165 node01
bm-12-1-2gi-2-1gb08 1/1 Running 0 9m 10.1.40.159 node05
bm-12-1-2gi-3-ltjx6 1/1 Running 0 9m 10.1.101.147 node04
bm-12-1-2gi-5-6xcp3 1/1 Running 0 9m 10.1.22.164 node01
bm-12-1-2gi-6-3sm8f 1/1 Running 0 9m 10.1.65.211 node07
bm-12-1-2gi-7-4mpxl 1/1 Running 0 9m 10.1.40.158 node05
bm-12-1-2gi-8-29mgd 1/1 Running 0 9m 10.1.101.146 node04
bm-12-1-2gi-9-mwzhq 1/1 Running 0 9m 10.1.70.163 node06

The same spread is also realized for larger concurrencies, and at 192 we observe 32 jobs per host in every case. Below are some screenshots of KubeUI and Grafana from my tests:

Screenshots: KubeUI showing 192 concurrent pods; compute cycles at different concurrencies over half a day of testing; LXD pinning Kubernetes workers to CPUs; and the whole machine at about 100% usage.

Collecting and aggregating results

Raw Logs

This is where it becomes a bit tricky. We could use an ELK stack and extract the logs there, but I couldn’t find a way to make it really easy to measure our KPIs.
Looking at what Docker does in terms of logging, you need to go on each machine and look into /var/lib/docker/containers//-json.log
Here we can see that each job generates exactly 82 lines of log, but only some of them are interesting:

  • First line: gives us the start time of the log
{"log":"ffmpeg version 3.1.2 Copyright (c) 2000-2016 the FFmpeg developers\n","stream":"stderr","time":"2017-03-17T10:24:35.927368842Z"}
  • Line 13: name of the source
{"log":"Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/sintel_trailer-1080p.mp4':\n","stream":"stderr","time":"2017-03-17T10:24:35.932373152Z"}
  • Last line: end-of-transcoding timestamp
{"log":"[aac @ 0x3a99c60] Qavg: 658.896\n","stream":"stderr","time":"2017-03-17T10:39:13.956095233Z"}

For advanced performance geeks, line 64 also gives us the transcode speed per frame, which can help profile the complexity of the video. For now, we don’t really need that.

Mapping to jobs

The raw log name is only a Docker UUID, which does not help us very much in understanding which job it relates to. Kubernetes gracefully creates links in /var/log/containers/ mapping the pod names to the Docker UUID:

bm-1–0.8–1gi-0-t8fs5_default_transcoder-0-a39fb10555134677defc6898addefe3e4b6b720e432b7d4de24ff8d1089aac3a.log

So here is what we do:
  1. Collect the list of logs on each host:
for i in $(seq 0 1 ${MAX_NODE_ID}); do
  [ -d stats/node0${i} ] || mkdir -p stats/node0${i}
  juju ssh kubernetes-worker-cpu/${i} "ls /var/log/containers | grep -v POD | grep -v 'kube-system'" > stats/node0${i}/links.txt
  juju ssh kubernetes-worker-cpu/${i} "sudo tar cfz logs.tgz /var/lib/docker/containers"
  juju scp kubernetes-worker-cpu/${i}:logs.tgz stats/node0${i}/
  cd stats/node0${i}/
  tar xfz logs.tgz --strip-components=5 -C ./
  rm -rf config.v2.json host* resolv.conf* logs.tgz var shm
  cd ../..
done

2. Extract the important log lines (adapt per environment for the number of nodes…)

ENVIRONMENT=lxd
MAX_NODE_ID=1
echo "Host,Type,Concurrency,CPU,Memory,JobID,PodID,JobPodID,DockerID,TimeIn,TimeOut,Source" | tee ../db-${ENVIRONMENT}.csv
for node in $(seq 0 1 ${MAX_NODE_ID}); do
  cd node0${node}
  while read line; do
    echo "processing ${line}"
    NODE="node0${node}"
    CSV_LINE="$(echo ${line} | head -c-5 | tr '-' ',')" # note: it's -c-6 for logs from bare metal or aws, -c-5 for lxd
    UUID="$(echo ${CSV_LINE} | cut -f8 -d',')"
    JSON="$(sed -ne '1p' -ne '13p' -ne '82p' ${UUID}-json.log)"
    TIME_IN="$(echo $JSON | jq --raw-output '.time' | head -n1 | xargs -I {} date --date='{}' +%s)"
    TIME_OUT="$(echo $JSON | jq --raw-output '.time' | tail -n1 | xargs -I {} date --date='{}' +%s)"
    SOURCE=$(echo $JSON | grep from | cut -f2 -d"'")
    echo "${NODE},${CSV_LINE},${TIME_IN},${TIME_OUT},${SOURCE}" | tee -a ../../db-${ENVIRONMENT}.csv
  done < links.txt
  cd ..
done

Once we have all the results, we load them into a Google Spreadsheet and look into the results…

Results Analysis

Impact of Memory

Once the allocation is above what is necessary for ffmpeg to transcode a video, memory is, to a first approximation, a non-impacting variable. However, looking more closely we can see a slight increase in performance, in the range of 0.5 to 1%, between 1 and 4GB allocated.
Nevertheless, this factor was not taken into account.

Influence of RAM
RAM does not impact performance (or only marginally)

Impact of CPU allocation & Pinning

Regardless of the deployment method (AWS or Bare Metal), there is a change in behavior when allocating less or more than 1 CPU “equivalent”.

Being below or above the line

Running CPU allocation under 1 gives the best consistency across the board. The graph shows that the variations are contained, and what we see is an average variation of less than 4% in performance.


Low CPU per pod gives low influence of concurrency (running jobs with CPU requests under 1). Interestingly, the heat map shows that the worst performance is reached when (Concurrency * CPU count) ~ 1. I don’t know how to explain that behavior. Ideas?

Heat map for CPU lower than 1
If total CPU is about 1, the performance is the worst.

Being above the line

As soon as you allocate more than one CPU, concurrency directly impacts performance. Regardless of the allocation, there is an impact, with a concurrency of 3.5 leading to about a 10 to 15% penalty. Using more workers with fewer cores will increase the impact, up to 40-50% at high concurrency.

As the graphs show, not all concurrencies are made equal. The graphs below show duration as a function of concurrency for various setups.

AWS with or without LXD, at 2, 4 and 5 cores per job.

When concurrency is low and the performance is well profiled, slicing hosts thanks to LXD CPU pinning is always a valid strategy.

By default, LXD CPU-pinning in this context will systematically outperform the native scheduling of Docker and Kubernetes. It seems a concurrency of 2.5 per host is the point where Kubernetes allocation becomes more efficient than forcing the spread via LXD.

However, unbounding CPU limits for the jobs will let Kubernetes use everything it can at any point in time, and result in an overall better performance.

When using this last strategy, the performance is the same regardless of the number of cores requested for the jobs. The below graph summarizes all results:

AWS: duration as a function of concurrency
All results: unbounding CPU cores homogenizes performance

Impact of concurrency on individual performance

Concurrency impacts performance. The below table shows the % of performance lost because of concurrency, for various setups.

Performance penalty as a function of concurrency
Performance is impacted by 10 to 20% when concurrency is 3 or more

Conclusion

In the context of transcoding or another CPU intensive task,

  • If you always allocate less than 1 CPU to your pods, concurrency doesn’t impact CPU-bound performance; still, be careful about the other aspects. Our use case doesn’t depend on memory or disk IO, yours could.
  • If you know in advance your max concurrency and it is not too high, then adding more workers with LXD and CPU pinning them always gets you better performance than native scheduling via Docker. This has other interesting properties, such as dynamic resizing of workers with no downtime, and very fast provisioning of new workers. Essentially, you get a highly elastic cluster for the same number of physical nodes. Pretty awesome.
  • The winning strategy is always to over-provision CPU limits to the max so that every bit of performance is allocated instantly to your pods. Of course, this cannot work in every environment, so be careful when using this, and test if it fits with your use case before applying in production.

These results are in AWS, where there is a hypervisor between the metal and the units. I am waiting for hardware with enough cores to complete the task. If you have hardware you’d like to throw at this, be my guest and I’ll help you run the tests.

Finally and to open up a discussion, a next step could also be to use GPUs to perform this same task. The limitation will be the number of GPUs available in the cluster. I’m waiting for some new nVidia GPUs and Dell hardware, hopefully I’ll be able to put this to the test.

There are some unknowns that I wasn’t able to sort out. I made the result dataset of ~3000 jobs open here, so you can run your own analysis! Let me know if you find anything interesting!


Stéphane Graber: USB hotplug with LXD containers


LXD logo

USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let’s run an LXD container with the Android debugging tools installed, accessing a USB-connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone over USB, make sure it’s unlocked and you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list, in my case, that’d be the “Sony Ericsson Mobile” entry.

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default. So unplugging the device and plugging it back on the host will have it removed and re-added to the container.

The “productid” property isn’t required, you can set only the “vendorid” so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices or devices which change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, requiring the device be present for the container to be allowed to start.

More details on USB device properties can be found here.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
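For example, a USB serial adapter that the host’s kernel driver exposes as /dev/ttyUSB0 could be passed through like this (the device name and path are just an illustration):

# /dev/ttyUSB0 is a character device created by a kernel driver on the host,
# so it is passed as a unix-char device rather than a usb device
lxc config device add c1 serial unix-char path=/dev/ttyUSB0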

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

The Fridge: Ubuntu Weekly Newsletter Issue 503


Welcome to the Ubuntu Weekly Newsletter. This is issue #503 for the weeks March 13 – 26, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • OerHeks
  • Chris Guiver
  • Darin Miller
  • Alan Pope
  • Valorie Zimmerman
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Sebastian Kügler: Parrotfish


I’ve been trying macro photography and using the depth of field to make the subject of my photos stand out more from the background. This photo of a parrotfish shows promising results beyond “blurry fish butt” quality. I’ll definitely use this technique more often in the future, especially for colorful fish with colorful coral in the background.

#photographyfirstworldproblems

Ubuntu Insights: Machine Learning with Snaps


Late last year Amazon introduced a new EC2 image customized for Machine Learning (ML) workloads. To make things easier for data scientists and researchers, Amazon worked on including a selection of ML libraries in these images so they wouldn’t have to go through the process of downloading and installing them (and oftentimes building them) themselves.

But while this saved work for the researchers, it was no small task for Amazon’s engineers. To keep offering the latest version of these libraries they had to repeat this work every time there was a new release, which was quite often for some of them. Worst of all they didn’t have a ready-made way to update those libraries on instances that were already running!

By this time they’d heard about Snaps and the work we’ve been doing with them in the cloud, so they asked if it might be a solution to their problems. Normally we wouldn’t Snap libraries like this; we would encourage applications to bundle them into their own Snap package. But these libraries had an unusual use case: the applications that needed them weren’t meant to be distributed. Instead, the application would exist to analyze a specific data set for a specific person. So as odd as it may sound, the application developer was the end user here, and the library was the end product, which made it fit into the Snap use case.

To get them started I worked on developing a proof of concept based on MXNet, one of their most used ML libraries. The source code for it is part C++, part Python, and Snapcraft makes working with both together a breeze, even with the extra preparation steps needed by MXNet’s build instructions. My snapcraft.yaml could first compile the core library and then build the Python modules that wrap it, pulling in dependencies from the Ubuntu archives and Pypi as needed.

This was all that was needed to provide a consumable Snap package for MXNet. After installing it you would just need to add the snap’s path to your LD_LIBRARY_PATH and PYTHONPATH environment variables so it would be found, but after that everything Just Worked! For an added convenience I provided a python binary in the snap, wrapped in a script that would set these environment variables automatically, so any external code that needed to use MXNet from the snap could simply be called with /snap/bin/mxnet.python rather than /usr/bin/python (or, rather, just mxnet.python because /snap/bin/ is already in PATH).
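In practice that means something along these lines (the snap name and its internal layout here are assumptions for illustration; the mxnet.python wrapper mentioned above takes care of this for you):

# Hypothetical paths -- adjust to the actual snap name and layout
export LD_LIBRARY_PATH="/snap/mxnet/current/lib:${LD_LIBRARY_PATH}"
export PYTHONPATH="/snap/mxnet/current/lib/python2.7/site-packages:${PYTHONPATH}"
python -c "import mxnet as mx; print(mx.__version__)"

# Or simply use the wrapper shipped in the snap:
mxnet.python -c "import mxnet as mx; print(mx.__version__)"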

I’m now working with upstream MXNet to get them building regular releases of this snap package to make it available to Amazon’s users and anyone else. The Amazon team is also seeking similar snap packages from their other ML libraries. If you are a user or contributor to any of these libraries, and you want to make it easier than ever for people to get the latest and greatest versions of them, let’s get together and make it happen! My MXNet example linked to above should give you a good starting point, and we’re always happy to help you with your snapcraft.yaml in #snapcraft on rocket.ubuntu.com.

If you’re just curious to try it out yourself, you can download my snap and then follow along with the MXNet tutorial, using the above-mentioned mxnet.python for your interactive Python shell.

Alan Pope: Snapcraft Docs Day


Announcing Snapcraft Docs Day

Snap is a simple archive format for big things.

Snapcraft is a delightful tool for automatically building and publishing software for any Linux system or device. Our documentation and tutorials are great for getting started with snapcraft. We can always improve these though, so this Friday will be our first Snapcraft Docs Day.

  • When: Friday, 31st March 2017, all day
  • Where: #snapcraft on Rocket Chat
  • Who: Developers & documentation experts of all levels

Why we're doing this

The goal is to ensure our documentation and tutorials are useful and accurate. We’re keen to get people testing our documentation, to make sure it’s clear, understandable and comprehensive. If we’re missing anything, or there are mistakes then file those issues, or better yet, fix them yourself.

If you’ve got something you want to snap, this is also a great day to get started. We’ve personally used these tools all day every day for a couple of years now, but perhaps we’re missing something you need. Now is a great time to test the tools and let us know.
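If you want to dive straight in on the day, getting a first snapcraft.yaml in front of you is as simple as this (assuming Ubuntu 16.04 or later):

# Install the tools and generate a template snapcraft.yaml to start from
sudo apt install snapcraft
snapcraft init    # writes a skeleton snapcraft.yaml into the current project
snapcraft         # builds the snap once the yaml is filled in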

Get involved

If you’re interested in contributing to the projects but don’t know where, here’s a great place to start. Snapcraft and Snapd are both free software, and hosted on GitHub. Snapcraft is written in Python, and Snapd is a Go-based project. The teams behind these projects are super friendly, and keen to help you contribute.

Some examples of things you might want to try:-

Hang out with us in #snapcraft on Rocket Chat during the day. Many of the snapd and snapcraft developers are there to answer your questions and help you. See you there!

Code and bug trackers

Here's a handy reference of the projects mentioned with their repos and bug trackers:-

Project          Source                  Issue Tracker
Snapd            Snapd on GitHub         Snapd bugs on Launchpad
Snapcraft        Snapcraft on GitHub     Snapcraft bugs on Launchpad
Snapcraft Docs   Snappy-docs on GitHub   Snappy-docs issues on GitHub
Tutorials        Tutorials on GitHub     Tutorials issues on GitHub

Ubuntu Insights: Battlestar Solution


This is a guest post by Peter Kirwan, technology journalist. If you would like to contribute a post, please contact ubuntu-devices@canonical.com

Forecasts suggest that by 2020, 20bn to 30bn IoT devices will be connected to networks worldwide. Almost certainly, the tactics employed by Colonel Saul Tigh in this clip from episode 201 of Battlestar Galactica will not successfully secure these devices against exploitation in DDoS attacks.

And yet, the notion of mass disconnection seems attractive to some.

In the wake of two huge Mirai-mediated DDoS attacks last year, Mark Warner, a US Senator for Virginia, wrote to the chairman of the Federal Communications Commission, which is responsible for regulating America’s communications networks, with the following query.

“Would it be a reasonable network management practice for ISPs to designate insecure network devices as ‘insecure’ and thereby deny them connections to their networks, including by refraining from assigning devices IP addresses?”

In reply, FCC chairman Tom Wheeler didn’t offer an opinion on whether it would be “reasonable” to disconnect “insecure” devices. He didn’t point out that state-directed disconnection wouldn’t be effective beyond US borders. Neither did he comment on the huge potential invasion of privacy involved, nor on the legal nightmare that would confront telcos cutting off their customers’ access to the network.

What Wheeler did say was that the FCC would be investigating “market failure” within the “device manufacturer community”. He also raised the possibility that the FCC could adjust the way it approves devices for consumer use “to protect networks from IoT device security risks”.

This is how regulation starts: with agencies like the FCC searching for remedies that lie within their grasp.

IoT-mediated DDoS is a big deal. In the long run, as we point out in this recent white paper, device vendors will need to manage vulnerability if they are to escape liability.

At the moment, there’s a lot wrong with the kind of low-end IoT devices that played such a prominent role in Mirai-based DDoS attacks in Europe and North America. The list includes hard-coded passwords, limited or non-existent UIs, inability to update and a lack of encryption and secure key storage.

If you construct an OS for IoT devices with security in mind, the results look very different. Canonical’s Ubuntu Core, for example, offers the following:

  • Reduced footprint, with minimum points of vulnerability
  • A centralised mechanism for software updates
  • Automatic rollback to last known working configuration
  • Read-only, digitally-signed files
  • Sandboxed applications
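
As a rough illustration of how the centralised updates and automatic rollback listed above look in practice on a snap-based system (the snap name below is a placeholder, not a real package):

snap list                    # show installed snaps and their revisions
sudo snap refresh            # pull updates for all installed snaps from the store
sudo snap revert my-iot-app  # placeholder name; reverts to the previously installed revision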

In the face of complex threats, the notion of pulling the plug, Galactica-style, amounts to little more than rhetoric designed to appease those whose main contribution to debate involves the phrase “something must be done”.

To contain the threats posed by the explosion of IoT connectivity, we need to take action on multiple fronts. As the FCC indicates, the device itself is one of the most important vectors of all. When IoT devices contain software designed with security in mind, the attack surface will shrink significantly. That’s a goal worth shooting for.

Brandon Holtsclaw: The Best Instagram Photos


Photography sites like Instagram can greatly increase the appeal of your own site if the photos on it are genuinely excellent. Simply uploading pictures to the site is not enough, however; there has to be clear planning around how those images are designed and maintained within the photography website.


This short article offers a few tips to help you design and get the most out of a site that contains a lot of photography.

Large Photos

Big or oversized images on a site can attract significant attention from visitors, and larger pictures often have a more dramatic impact than expected. The images should, however, reflect exactly what the brand stands for on Instagram by including the necessary information within them.

Include Sliders

Sliders are useful when you have multiple pictures to display. One of their key advantages is support for many image dimensions, which gives designers flexibility because the layout is not restricted to a full-screen image. It is not desirable to include too many photos, though, as that can dilute the impact on Instagram users. Stick to the few pictures that have the most impact and make sure they are relevant to the message you want to communicate to prospective clients. Another benefit of sliders is that the photos can be swapped out for others whenever needed.

Collage

A collage is another way to make a photography website more engaging for visitors: it brings together a striking group of images. Web designers do have to give extra attention to photo selection, cropping and grouping, though, because there are cases where the image dimensions won’t fit the display in landscape orientation.

 
If the photography carries the theme of the website, then you have to make sure the rest of the design elements stay simple. When the photos can speak for the theme of your site, text becomes less important in the layout. Use buttons and colours that are simple in character, which helps visitors enjoy the photos with full attention.

Limit the use of effects

Designers have to keep page loading times in mind, since site speed has a big influence on how long visitors stay. Slower loading times can cause visitors to leave before the page has finished loading. High-resolution photos with large file sizes, along with special effects applied to them, can slow a site down. It is best to avoid such effects if they affect loading speed; if effects are unavoidable, stick to simple ones.

Image Style

Some photography websites use fun ‘themes’ for their photos, which helps attract visitors’ interest to the images. It could be a black-and-white gallery or an Instagram-style look, which are visually pleasing and engaging at the same time. Make sure, however, that any theme complements the photography rather than hiding it.

Shapes and sizes

Because pictures naturally vary in shape and size, it is good to include a mix of differently shaped and sized photos to give visitors a visual treat. It is also worth paying extra attention to composition and framing, and placing images carefully with the colour and background of the site in mind.

Accurate text placement

Make sure the most important parts of the photos are not covered by text or buttons, since that can reduce the beauty of the photography on the site. Contrast is also a vital aspect of designing a site built around images: text and backgrounds should stand out from the photography. Plain white or black backgrounds are often recommended. Give the pictures room with sufficient white space, and ensure each photo has enough contrast and context to distinguish it from the surrounding ones.

Creating the pictures

Designing around the images is an essential part of building a photography-based website. To do this, the pictures should be chosen before designing the structure that supports the content.

Conclusion

Photography websites are still considered among the best on the internet because of their capacity to engage visitors effectively. The ideas discussed above are intended to help you build a simple photography site in the style of Instagram.


Brandon Holtsclaw: Get Real Followers On Instagram


Social media has a great impact on the search engine optimisation and marketing of your business website. According to the 2012 Social Media Marketing Industry Report, nearly 85 percent of businesses said that social media marketing played an important role in their exposure and growth, and 69 percent reported that it increased their online traffic. Today most businesses work deliberately on their social media marketing, which has become easy, effective and affordable. You can easily get cheap Instagram followers, likes and comments.


Establishing a strong online presence is an important aspect of any internet marketing strategy, and Instagram is one of the most popular social networking platforms. It has around 100 million monthly active users, with 40 million photos uploaded per day receiving 8,500 likes and 1,000 comments per second. With figures like these, one can’t deny that Instagram can change the trajectory of your business. Instagram is all about photo sharing.

You simply need to take pictures of your products or services with your smartphone and upload them to your Instagram profile. These pictures can earn you thousands of real Instagram followers, comments and shares, and plenty of exposure to targeted customers. Instagram is a great way for brands to communicate with their customers and reach new potential ones. The uploaded photos should be attractive and relevant to your existing or new products; this builds brand awareness and encourages customers to purchase your products or services.

Photo sharing alone is not enough to get 100 percent out of the platform, though; you also need to use hashtags to promote your photos. These hashtags improve the quality of your followers and give your brand more exposure. You can also take advantage of keywords by adding them to photo names, captions and so on. This makes your images more searchable and helps them appear in top search engine results. Instagram image optimisation can be an ideal way to earn a fair organic listing.

You can use Instagram most efficiently by following a few simple steps, which include:

Contests 

This is one of the most popular ways to use the platform: ask customers to take pictures of themselves using your products or services and upload them to win prizes. You can also let them vote on their favourite product photo to increase interaction.

Event Marketing 

You can ask people to take pictures at an event you organise. This gets them excited and gives them a good platform for interaction.

Social Media 

By sharing Instagram photos of your brand, products and services on other social media websites (such as Facebook and Twitter), you can promote your business further.

Connection and Location 

Instagram is a great platform for establishing new connections with customers and interacting with them. You can also add a location to your photos, which promotes your business more effectively at the local level.

Now you can buy followers on Instagram from buyrealsocialmarketing.com, a leading US social media marketing and consulting firm that offers comprehensive Instagram packages at very affordable prices. They provide guaranteed results by maximising your Instagram followers, comments and likes. Their solutions are 100 percent secure and safe, helping you get the most out of Instagram and grow your business.

Process To Buy Real Instagram Followers

Building the site is the first step in the process. There are many social networks to choose from, and a customer database can also be built from them. As the business grows, the online presence should be maintained, and this can take you to the top. Today social media is necessary for all types of business and has become a major channel for marketing. For people doing business online in particular, social media has become essential for marketing their products. One way to increase a site’s popularity is to buy Instagram followers. The more likes a product has, the better its sales tend to be: customers feel that the products with the most likes are the best rated, and they choose them. As people see you buy active Instagram followers, they start trusting you.
You need to check that the Instagram likes you buy are of good quality. When you buy real, active Instagram followers, you can add them gradually, which keeps things comfortable when you need likes. There may be situations where you need likes but cannot buy them at that moment, so you can add likes you bought earlier. People buy Instagram likes, keep them in reserve and use them whenever necessary. To maintain credibility, do not add all the likes at once: buy them according to need rather than for fun, add them slowly, and keep some spare for emergencies.
Social media helps a great deal in promoting a business these days. On some sites, product sales increase simply by buying followers and likes for the product. Customers have a wide choice of products and can shop on whichever site they want, but most of them look for the products with the largest number of likes and followers, and this is why people buy Instagram likes.

 

Simon Raffeiner: Free Software Foundation okay with new GitHub terms, but recommends other services

In a statement issued on March 14, 2017, the Free Software Foundation declares that the new GitHub Terms of Service don't conflict with copyleft, but it still recommends using other hosting sites.

Ubuntu Podcast from the UK LoCo: S10E02 – Wiry Labored Sense - Ubuntu Podcast


It’s Season Ten Episode Two of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

The same line-up as last week is here again for another episode.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Daniel Pocock: Brexit: If it looks like racism, if it smells like racism and if it feels like racism, who else but a politician could argue it isn't?


Since the EU referendum got under way in the UK, it has become almost an everyday occurence to turn on the TV and hear some politician explaining "I don't mean to sound racist, but..." (example)

Of course, if you didn't mean to sound racist, you wouldn't sound racist in the first place, now would you?

The reality is, whether you like politics or not, political leaders have a significant impact on society and the massive rise in UK hate crimes, including deaths of Polish workers, is a direct reflection of the leadership (or profound lack of it) coming down from Westminster. Maybe you don't mean to sound racist, but if this is the impact your words are having, maybe it's time to shut up?

Choosing your referendum

Why choose to have a referendum on immigration issues and not on any number of other significant topics? Why not have a referendum on nuking Mr Putin to punish him for what looks like an act of terrorism against the Malaysian Airlines flight MH17? Why not have a referendum on cutting taxes or raising speed limits, turning British motorways into freeways or an autobahn? Why choose to keep those issues in the hands of the Government, but invite the man-in-a-white-van from middle England to regurgitate Nigel Farage's fears and anxieties about migrants onto a ballot paper?

Even if David Cameron sincerely hoped and believed that the referendum would turn out otherwise, surely he must have contemplated that he was playing Russian Roulette with the future of millions of innocent people?

Let's start at the top

For those who are fortunate enough to live in parts of the world where the press provides little exposure to the antics of British royalty, an interesting fact you may have missed is that the Queen's husband, Prince Philip, Duke of Edinburgh is actually a foreigner. He was born in Greece and has Danish and German ancestry. Migration (in both directions) is right at the heart of the UK's identity.

Queen and Prince Philip

Home Office minister Amber Rudd recently suggested that British firms should publish details about how many foreign people they employ and in which positions. She argued this is necessary to help boost funding for training local people.

If that is such a brilliant idea, why hasn't it worked for the Premier League? It is a matter of public knowledge how many foreigners play football in England's most prestigious division, so why hasn't this caused local clubs to boost training budgets for local recruits? After all, when you consider that England hasn't won a World Cup since 1966, what have they got to lose?

Kevin Pietersen

All this racism, it's just not cricket. Or is it? One of the most remarkable cricketers to play for England in recent times, Kevin Pietersen, dubbed "the most complete batsman in cricket" by The Times and "England's greatest modern batsman" by the Guardian, was born in South Africa. In the five years he was contracted to the Hampshire county team, he only played one match, because he was too busy representing England abroad. His highest position was nothing less than becoming England's team captain.

Are the British superior to every other European citizen?

One of the implications of the rhetoric coming out of London these days is that the British are superior to their neighbours, entitled to have their cake and eat it too, making foreigners queue up at Paris' Gare du Nord to board the Eurostar while British travelers should be able to walk or drive into European countries unchallenged.

This superiority complex is not uniquely British; similar delusions are rampant in many of the places where I’ve lived, including Australia, Switzerland and France. America’s Donald Trump has taken this style of politics to a new level.

Look in the mirror Theresa May: after British 10-year old schoolboys Robert Thompson and Jon Venables abducted, tortured, murdered and mutilated 2 year old James Bulger in 1993, why not have all British schoolchildren fingerprinted and added to the police DNA database? Why should "security" only apply based on the country where people are born, their religion or skin colour?

Jon Venables and Robert Thompson

In fact, after Brexit, people like Venables and Thompson will remain in Britain while a Dutch woman, educated at Cambridge and with two British children will not. If that isn't racism, what is?

Running foreigner's off the roads

Theresa May has only been Prime Minister for less than a year but she has a history of bullying and abusing foreigners in her previous role in the Home Office. One example of this was a policy of removing driving licenses from foreigners, which has caused administrative chaos and even taken away the licenses of many people who technically should not have been subject to these regulations anyway.

Shouldn't the DVLA (Britain's office for driving licenses) simply focus on the competence of somebody to drive a vehicle? Bringing all these other factors into licensing creates a hostile environment full of mistakes and inconvenience at best and opportunities for low-level officials to engage in arbitrary acts of racism and discrimination.

Of course, when you are taking your country on the road to nowhere, who needs a driving license anyway?

Run off the road

What does "maximum control" over other human beings mean to you?

The new British PM has said she wants "maximum control" over immigrants. What exactly does "maximum control" mean? Donald Trump appears to be promising "maximum control" over Muslims, Hitler sought "maximum control" over the Jews, hasn't the whole point of the EU been to avoid similar situations from ever arising again?

This talk of "maximum control" in British politics has grown like a weed out of the UKIP. One of their senior figures has been linked to kidnappings and extortion, which reveals a lot about the character of the people who want to devise and administer these policies. Similar people in Australia aspire to jobs in the immigration department where they can extort money out of people for getting them pushed up the queue. It is no surprise that the first member of Australia's parliament ever sent to jail was put there for obtaining bribes and sexual favours from immigrants. When Nigel Farage talks about copying the Australian immigration system, he is talking about creating jobs like these for his mates.

Even if "maximum control" is important, who really believes that a bunch of bullies in Westminster should have the power to exercise that control? Is May saying that British bosses are no longer competent to make their own decisions about who to employ or that British citizens are not reliable enough to make their own decisions about who they marry and they need a helping hand from paper-pushers in the immigration department?

maximum control over Jewish people

Echoes of the Third Reich

Most people associate acts of mass murder with the Germans who lived in the time of Adolf Hitler. These are the stories told over and over again in movies, books and the press.

Look more closely, however, and it appears that the vast majority of Germans were not in immediate contact with the gas chambers. Even Goebbels' secretary writes that she was completely oblivious to it all. Many people were simply small cogs in a big bad machine. The clues were there, but many of them couldn't see the big picture. Even if they did get a whiff of it, many chose not to ask questions, to carry on with their comfortable lives.

Today, with mass media and the Internet, it is a lot easier for people to discover the truth if they look, but many are still reluctant to do so.

Consider, for example, the fingerprint scanners installed in British post offices and police stations to fingerprint foreigners and criminals (as if they have something in common). If all the post office staff refused to engage in racist conduct the fingerprint scanners would be put out of service. Nonetheless, these people carry on, just doing their job, just following orders. It was through many small abuses like this, rather than mass murder on every street corner, that Hitler motivated an entire nation to serve his evil purposes.

Technology like this is introduced in small steps: first it was used for serious criminals, then anybody accused of a crime, then people from Africa and next it appears they will try and apply it to all EU citizens remaining in the UK.

How will a British man married to a French woman explain to their children that mummy has to be fingerprinted by the border guard each time they return from vacation?

The Nazis pioneered biometric technology with the tracking numbers branded onto Jews. While today's technology is electronic and digital, isn't it performing the same function?

There is no middle ground between "soft" and "hard" brexit

An important point for British citizens and foreigners in the UK to consider today is that there is no compromise between a "soft" Brexit and a "hard" Brexit. It is one or the other. Anything less (for example, a deal that is "better" for British companies and worse for EU citizens) would imply that the British are a superior species and it is impossible to imagine the EU putting their stamp on such a deal. Anybody from the EU who is trying to make a life in the UK now is playing a game of Russian Roulette - sure, everything might be fine if it morphs into "soft" Brexit, but if Theresa May has her way, at some point in your life, maybe 20 years down the track, you could be rounded up by the gestapo and thrown behind bars for a parking violation. There has already been a five-fold increase in the detention of EU citizens in British concentration camps and they are using grandmothers from Asian countries to refine their tactics for the efficient removal of EU citizens. One can only wonder what type of monsters Theresa May has been employing to run such inhumane operations.

This is not politics

Edmund Burke's quote "The only thing necessary for the triumph of evil is for good men to do nothing" comes to mind on a day like today. Too many people think it is just politics and they can go on with their lives and ignore it. Barely half the British population voted in the referendum. This is about human beings treating each other with dignity and respect. Anything less is abhorrent and may well come back to bite.

Ubuntu Insights: Making snap packages of photogrammetry software


This is a guest post by Alberto Mardegan, Software Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Some time ago I got vaguely interested into photogrammetry, that is the reconstruction of a 3D model out of a set of plain 2D photographs. I just thought that it was cool, and wanted to try it.

Unfortunately, the most popular of these tools, VisualSFM, was not packaged for Ubuntu and didn’t come with ready binaries. Furthermore, the steps to build it are far from trivial: they include modifying a few of the source files!

So, while I was going through this ordeal to compile it, I thought about how I could avoid running through all this pain again, should the need to build this program arise in the future. I initially thought of writing a shell script to automate it, but then I realized that there exists a much better solution: a snapcraft recipe! This solution has the big advantage that the resulting binary (called a “snap” package) can be shared with other Linux users by publishing it to the snap store. Therefore, one doesn’t need to be a programmer or a computer expert in order to install the software.

As I quickly found out, other “structure from motion” and “multi-view stereo” (the two parts of the 3D reconstruction pipeline) programs are also unavailable as binaries for Linux, and require quite some effort to build. As a matter of fact, this problem is quite common for scientific and academic software: it is written by real geniuses in their field of research, who are often not as experienced in (or interested in) software distribution.

So I thought — well, given that I’ve just made a snap package (and that I’ve even enjoyed the process!), why stop here? 🙂

And here you have it: most of this photogrammetry software is now available as snap packages, which makes it trivial to install and try out. Keep in mind, though, that the 3D reconstruction itself can take a lot of time, so that’s another thing to consider.
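
If you want to try them out, searching and installing from the store only takes a couple of commands. The snap names are assumptions based on the projects mentioned here, so use the search to confirm what is actually published:

snap find photogrammetry        # search the store for related snaps
sudo snap install cloudcompare  # assumed snap name for CloudCompare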

To help you decide which software to use, I made a video review of structure-from-motion and multi-view stereo tools, without any pretence: just the goal of giving an overview of what is available out there and how easy (or difficult!) each tool is to use:

I also “snapped” a couple of other programs related to 3D reconstruction. One of them is CloudCompare, a 3D point cloud and mesh processing tool.

In the future I may make more videos on this subject – stay tuned!

Original post here
