Channel: Planet Ubuntu

Ubucon Europe 2019: 2nd Batch of Calls Approved


Another two weeks have passed and you have submitted more great content. We are very excited about what’s being prepared, and got even more curious after reading your proposals.

We are happy to announce that we have approved the second batch of calls. To learn more about what to expect, read our short summary at:

https://manage.ubucon.org/eu2019/sneak/

More talks will be approved soon.

Do you have something to share? Hurry up, you still have time!

Submit yours now at:

https://manage.ubucon.org/eu2019/cfp


Josh Powers: cloud-init 19.2 Released

As announced, cloud-init 19.2 was released last Wednesday! From the announcement, some highlights include:

  • FreeBSD enhancements: added NoCloud datasource support, added growfs support for rootfs, and updated tools/build-on-freebsd for Python 3
  • Arch distro: added netplan rendering support
  • cloud-init analyze: reporting on boot events

And of course numerous bug fixes and other enhancements. Version 19.2 is already available in Ubuntu Eoan. A stable release update (SRU) to Ubuntu 18.
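The new boot-event reporting can be explored directly on any instance running the release; a quick sketch (subcommand behaviour as described in the cloud-init documentation):

```
# Timing breakdown of the most recent boot, stage by stage
cloud-init analyze show
# The same events, ordered by time consumed
cloud-init analyze blame
# Kernel and userspace boot timestamps (the new reporting in 19.2)
cloud-init analyze boot
```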

Canonical Design Team: Ubuntu helps BT launch its next-generation 5G cloud core


On 24 July, Canonical announced that BT (British Telecom Group) will use Charmed OpenStack on Ubuntu as a key component of its next-generation 5G core cloud. Canonical, the company behind Ubuntu, will provide the open-source virtualised infrastructure manager as part of BT’s network functions virtualisation (NFV) programme and its gradual migration to a cloud-based core network.

This open-source, cloud-based approach will ensure BT can rapidly deploy new services, add capacity and fully meet the customer demand generated by 5G and fibre-to-the-premises (FTTP). Canonical’s OpenStack architecture will also simplify the delivery of BT’s complete 5G core network.

The OpenStack cloud software decouples network hardware from software, turning core network components into software applications, which means they can be updated and iterated faster through continuous integration and delivery. This separation allows different network applications to share the same hardware across data centres, making the network more resilient and scalable when additional capacity is needed. Compared with replacing core network equipment, the speed of software updates opens up new ways of working for developing 5G services: BT will be able to build new services in weeks and deploy them in days.

Neil J. McRae, BT’s Chief Architect, said: “Canonical is providing us with the cloud-native foundation that enables us to create a smart and fully converged network. Utilising open source and best-of-breed technologies will ensure we can deliver on our convergence vision, and deliver world-leading 5G and FTTP experiences for our customers.”

Mark Shuttleworth, CEO of Canonical, said: “BT has recognised the efficiency, flexibility and innovation offered by an open-architecture approach, as well as its value for 5G service delivery. We are delighted to work with them to lay the groundwork for that open architecture, providing a solid foundation for BT’s 5G strategy.”

BT’s EE mobile network switched on 5G services in six cities on 30 May 2019. Customers and businesses in London, Birmingham, Cardiff, Manchester, Edinburgh and Belfast were the first in the UK to experience the benefits of 5G. BT has also outlined its 5G roadmap, which will introduce a full cloud-based 5G core network from 2022.

High bandwidth and low latency, combined with expanding and continually growing 5G coverage, will deliver a more responsive network, enabling truly immersive mobile augmented reality, real-time health monitoring and mobile cloud gaming. A full 5G core network is also an important step towards BT’s converged network technology, bringing fibre, mobile and WiFi together into a seamless customer experience.

With a cloud-based architecture, future developments such as ultra-reliable low-latency communications (URLLC) and ultra-fast distributed networks can be rolled out more flexibly. This phase of 5G will be key to enabling real-time traffic management of autonomous vehicle fleets, large sensor networks with millions of devices monitoring air quality across the country, and a tactile internet of remote, real-time interaction.

The post Ubuntu helps BT launch its next-generation 5G cloud core appeared first on Ubuntu Blog.

Canonical Design Team: Mir support for Wayland


What is Mir, what is Wayland, do I care?

Shells for graphical interfaces come in many forms, from digital signage and kiosks that just show a single full-screen application, to desktop environments that manage multiple applications, multiple screens and multiple workspaces. Traditionally, shells are built from a number of closely coupled components: display servers, compositors, window managers and panels.

Mir is a library for developing shells that makes it easier to share common functionality between them while preserving the freedom to be different where it matters. Mir provides window management defaults, but does not impose a particular window management style. It provides some default graphics hardware/driver stacks, but others can be (and are) used. It is possible to customize the compositing, etc.

Wayland is a protocol for communication between applications and shells, and there are de-facto APIs, libraries and other tools for working with this protocol. The core of the protocol is very narrow (basically just IPC and a mechanism for adding extensions); doing anything “real” requires Wayland extension protocols. Mir provides the core and the standard extensions by default, and provides ways to enable and/or implement additional Wayland extension protocols.

If you create a shell with Mir you do not need to develop anything it has in common with other shells: window management, support for various hardware, compositing, Wayland. You can concentrate on the features that make it unique.

How is Mir used?

Like any other library, Mir is used by linking it into an application. There are existing applications using Mir…

  • For appliances with a single, full-screen application we provide the “mir-kiosk” snap
  • UBports use Mir for the Ubuntu Touch phone operating system and the Unity8 desktop environment
  • There is work-in-progress supporting Wayland in the MATE desktop
  • To help developers get started there is an example desktop, egmde (shown running on Ubuntu Core), with tutorials showing the stages of development and available as both classic and confined snaps

Customizing Mir based shells

There are two basic ways to customize Mir: you can supply configuration options when the program is run, or you can write code that works with the Mir library.

Mir is structured so that the window management, compositing logic, graphics stacks and communications protocols can be configured independently. You can select a window management style (we provide floating, kiosk, and tiling window management examples) independently of other options such as the Wayland extension protocols supported.

In this article we’re going to focus on the options for configuring Wayland support.

Customizing Wayland extensions

The built in extensions

Mir comes with a number of Wayland extensions “built in”. The “recommended for every use” ones are enabled by default; others are disabled by default and must be enabled by configuration or code.

protocol                 default
wl_shell                 enabled
xdg_wm_base              enabled
zxdg_shell_v6            enabled
zwlr_layer_shell_v1      disabled
zxdg_output_manager_v1   disabled

The simplest way to configure these extensions is to take an existing Mir shell and specify the extensions you want:

miral-shell --wayland-extensions wl_shell:zxdg_shell_v6:zxdg_output_manager_v1

Depending on the way you are deploying your shell, the extensions can be specified on the command line, as an environment variable, or in a configuration file.
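For illustration, the same extension list could be supplied in each of these forms. Note that the environment-variable name and configuration-file path below are assumptions, based on Mir’s usual convention of mapping each command-line option to a MIR_SERVER_-prefixed variable and a key in the shell’s .config file:

```
# Command line
miral-shell --wayland-extensions wl_shell:zxdg_shell_v6:zxdg_output_manager_v1

# Environment variable (assumed MIR_SERVER_* mapping)
MIR_SERVER_WAYLAND_EXTENSIONS=wl_shell:zxdg_shell_v6:zxdg_output_manager_v1 miral-shell

# Configuration file (e.g. ~/.config/miral-shell.config)
wayland-extensions=wl_shell:zxdg_shell_v6:zxdg_output_manager_v1
```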

When you write your own shell you can enable and/or disable extensions in the code. You can also write a “filter” so that you can control the extensions available to particular applications. (For example, you probably only want to offer zwlr_layer_shell_v1 to shell components.)

Custom extensions

When writing shells it is common to have client “applications” that need access to features that are provided by the standard extensions. Unity8, for example, uses a richer set of window types so it can handle them well across both phones and desktops. This is where the ability to add Wayland extension protocols is useful: the client and server can exchange information not available by other means.  

In addition to the built in extensions, shells can add extensions of their own. There are two reasons to do this when writing a shell:

  • your Wayland extension would be useful for many shells but is not yet ready to upstream to Mir; or,
  • your Wayland extension is specific to your shell and of no use elsewhere.

Once you have added your Wayland extension to your shell it can be controlled in the same ways as the built in extensions.

Implementing and testing Wayland extensions

When you implement a custom Wayland extension there are three steps to the process:

  • First, you generate “wrapper” classes that represent the protocol in Mir;
  • Then, you code and test the protocol logic for these classes;
  • Finally, you register and configure the protocol in your shell.

Use the development tools the Mir team provides to help with this:

  • Use the headers and code generator for the wrappers from the “libmirwayland-dev” package;
  • Test the implementation of your Wayland protocols with the Wayland Conformance Test Suite, “wlcs”; and,
  • Register and configure your protocol using the Mir library “libmiral-dev”.
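On Ubuntu, the development packages named above can be pulled in with apt (package names as they appear in this post; wlcs availability may vary by release):

```
sudo apt install libmirwayland-dev libmiral-dev
# wlcs: install the distro package if available, or build it from source
```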

For more details on this development process follow the “worked example” which covers implementing the “primary_selection” protocol in the example egmde shell.

Summary

You might have many reasons to start building a graphical shell environment: you might want to customize the mir-kiosk experience, you might need a more desktop-like experience where multiple applications run on an appliance, or you might want to “scratch an itch” and build something better than everything that came before.

Wayland provides an extensible way for shell components to communicate with each other. If there is no existing protocol that meets your needs, writing one that is fit for your purpose is well supported.

Whatever the reason you have for developing a Wayland based shell, Mir provides both a great foundation and the tools to build on it.

The post Mir support for Wayland appeared first on Ubuntu Blog.

Daniel Pocock: Giving people credit for their work


One of the essential steps that we take when packaging software for major Linux distributions is finding the names of the copyright holders and ensuring that these are preserved in the final package of the software.

Debian even took this a step further by requiring developers to spend extra effort extracting the names from source code and copying them into machine readable debian/copyright files that are convenient for large corporate users. Some have come to see this as an extra barrier to new contributors and new packages.

While the administrative effort of processing this information is important, how seriously do we take the principle and philosophy of it?

Giving people credit for their work doesn't only support the person who made the contribution, it also shows everybody else that their contributions will be treated with respect. The opposite also holds true: if leaders fail to give credit for somebody's contribution or they give credit to the wrong person it can be demotivating for everybody else.

Non-coding contributions

The same principle holds whether it is for source code or other contributions, like investigating a bug, mentoring or doing administrative work.

Many people have been puzzled by the email from former Debian Project Leader (DPL) Chris Lamb where he fails to acknowledge the work I contributed as admin and mentor in GSoC over many years. Furthermore, reading emails like that, you might come to the conclusion that other people, including Molly de Blanc, who it is alleged Lamb was secretly dating, did the work in GSoC 2018. Yet people who participated in the program didn't feel that was accurate. Why has Lamb failed to recognize or thank me for my own contributions?

At first, the problems in Debian's GSoC team were puzzling for many of us. The allegation that Molly de Blanc was Lamb's girlfriend shines a new light on Lamb's email. Neither of them declared their relationship to other members of the GSoC team; it was a complete shock to me when I heard about it.

Please note I don't wish to encourage anybody to vilify either of these people for a relationship: the issue at hand is the effect of the relationship on the way they performed their roles and the impact on other volunteers.

The Mollamby affair, even if it is over, leaves many people wondering if their work will be recognised fairly.

This type of thing is not without precedent. Consider Elena Ceaușescu, the wife of former Romanian dictator Nicolae Ceaușescu. 17 December, the day that de Blanc's Anti-Harassment team ordered the Debian Account Managers to terminate a developer without any hearing or due process, was the anniversary of the day the Ceaușescus ordered the military to shoot their own citizens.

It is alleged that Elena Ceaușescu obtained her Ph.D. and pursued her career in chemistry by having her name put on other people's work. The couple even pressured the British Royal Institute of Chemistry to make her a fellow in exchange for business contracts. Is there any organization money can't corrupt?

As Stalin famously put it:

power alone is not enough, you need to gain prestige

The Ceaușescus got this point: they were the first communist dictators to get themselves invited on a state visit to the UK.

The feeling that the DPL's girlfriend takes credit from somebody like myself, who mentored three interns, spent a week visiting the interns in Kosovo and served in an admin role, is really quite demotivating.

For any organization to be healthy, it is essential for the leader to be setting a good example of giving credit to the people who deserve it. How can we expect new package maintainers to put effort into the tedious process of writing machine-readable copyright files if the leader himself can't give credit to somebody who made a major contribution to a program like GSoC over six years? How will other developers in the wider community feel now that these allegations about the former DPL and his girlfriend have emerged?

Some people would rightly point out that the regime of Nicolae and Elena Ceaușescu was one of the most barbaric in Eastern Europe and that as well as benefiting from nepotism to advance her chemistry career, Elena Ceaușescu had been involved in propaganda and torture. Despite de Blanc benefitting from the work of others and quickly gaining titles for herself, it wouldn't be fair to compare Mollamby to the Ceaușescu regime if phenomena like torture were absent from Debian. We'll get to that.

Building a smear

Sadly, the new DPL Sam Hartman has continued in the same vein as his predecessor. Hartman started a thread on debian-project suggesting somebody adopt the RTC services I introduced to Debian. He didn't contact me before doing that. Hartman doesn't give me any recognition for the effort I put in to get these things up and running in the first place: he doesn't mention my name at all. It is as if he wants to make me vanish like Arjen Kamphuis.

In fact, Kamphuis and I had collaborated on a number of activities in the Balkans before his disappearance, here is a photo from the Tirana cryptoparty:

Most organizations would provide volunteers with some empathy and support at the time that one of their collaborators or associates disappears like that: Debian may be the first to take inspiration from such a disappearance.

Personally, I've been responsible for a sizeable amount of the upstream development of those services (reSIProcate and JSCommunicator WebRTC projects), release management, packaging (both Debian/Ubuntu packages and Fedora packages), documentation and managing the successful deployment to production in Debian's rtc.debian.org service. I did the same to create the FedRTC.org service for Fedora users, running the same stack as Debian but linked to Fedora SSO.

While I feel I was responsible for a lot of the initiative to bootstrap these services, I wouldn't be consistent with the message of this blog if I didn't also emphasize that deploying and running services like that in production requires effort from a lot of other people too. For example, members of the Debian System Administration (DSA) team who helped integrate all the services with Debian's user data (Luca Filipozzi helping at the most critical stages), the Prosody project lead, Matthew Wild, and package maintainers like Victor Seva who do most of the work on the XMPP side of rtc.debian.org. Not only have all these people made important contributions, but it also has to be acknowledged that the successful deployment of the services in a large organization like Debian is also a credit to how well everybody worked together as a team.

For two successive leaders of the Debian project to fail to recognize sizeable contributions like these is incredibly toxic. It is inconsistent with the basic principles we follow crediting people for their code in our packages and therefore it is unfitting for the leader to behave this way. If a Debian Developer removed the main contributor from a debian/copyright file, they would attract strong criticism. Why does the leader get away with such behaviour?

Is there torture in Debian?

There have been a number of significant issues in my private and family life that impacted my contributions to Debian over the last couple of years.

In January 2018, I wrote to Molly de Blanc, as a fellow member of the Outreach team, advising I couldn't fully commit to the GSoC admin role but would help out temporarily to get Debian GSoC 2018 up and running. In July 2018, I sent another private message to de Blanc, Lamb (as DPL) and Stephanie Taylor (Google) informing them that extraordinary personal circumstances had limited my role. In August 2018 I thanked the rest of the team and advised I wouldn't participate in 2019. No volunteer should be obliged to give any more details than that.

Yet people weren't happy with that. People sent me threats and insults and they saw it as an opportunity to start gossip. Lamb has tried to evade responsibility with the excuse that gossip sent in private emails and not a public mailing list is acceptable. Is that the attitude of a mature leader or does it sound more like a fudge constructed by a mischievous child?

There have been four significant and tragic things going on in my personal life in a short space of time. By forcing people to question me, Lamb forces me to recall and explain to people some of those other circumstances. This ruthlessly degrading aggression from Lamb compromises the privacy of other family members who may not wish to be named or discussed. It is irrelevant whether the gossip messages fit Lamb's childish definition of public or not.


One of those tragedies was the death of my father.


At a time of pain and grief, is it appropriate for the leader of Debian to put such immense pressure on a volunteer as suggesting they explain all that and put their membership up to a vote of the whole Debian community?

Is it even human for him to dismiss such significant loss as minutiae?

But he did those things, the hideous messages relayed to me through a puppet, Enrico Zini.

I found nothing positive or constructive in those messages. Their only purpose appears to be the pursuit of some sadistic pleasure, believing shame would prevent me from calling out Lamb's bullying.

Is it acceptable for Joerg Jaspert to denounce multiple volunteers in front of a journalist? Yet he did that too.

Having grown up in a country famous for cold-blooded reptiles, I'm surprised to hear such a callous attitude from another human being.

Who is harassing who?

After I informed Lamb about my circumstances in July 2018 and completely resigned from the GSoC team, Lamb's decision to seek political mileage from that and sustain a state of hostility with threats and constant gossip may be one of the most brazen examples of harassment ever seen in a free software community. But if the leader's girlfriend is an Anti-harassment team insider, maybe he can get away with harassing volunteers.

When I describe to other people what has been happening in Debian recently, it brings unreserved scorn. People have compared the tightly coordinated process of threats and defamation to the work of gangsters and the mafia.

One reader of my blog has commented that it looks like the workings of Scientology, who also use excommunication and demotions to maintain control through fear. Her brother had joined that organization some years ago and disconnected her, which is scientology-speak for pretending she doesn't exist. Sam Hartman's recent emails asking somebody to take over my work are eerily similar. Together, the Anti-harassment team and Debian Account Managers have parallels with Scientology's notorious Sea Organization. For Lamb to impose that type of psychological violence on a volunteer in a time of grief is horrendous.

Even GSoC interns have noticed that another Debian Developer's recent apology email to the debian-project email list looked like a forced confession. An observant reader compared it to Mao's notorious thought reform programs.

Yet if he can be broken down in just three months of gangster-like blackmail, threats, defamation and humiliation, take a moment to imagine what it is like being subject to Mollamby's mind control games for over a year while also dealing with significant personal tragedy and grief.

If you have enough empathy to understand, you may well realize why I feel completely comfortable comparing Chris Lamb and Molly de Blanc to people like Nicolae and Elena Ceaușescu. If not, go and read this article on methods of psychological torture and see how many you can relate to the practices people have complained about in Debian recently. The repetitive assertions that I'm not a real Debian Developer, for example, are a form of gaslighting, number 12 in that torture-master's recipe book.

Here is what Psychology Today says about the practice:

Gaslighting is a tactic in which a person or entity, in order to gain more power, makes a victim question their reality. It works much better than you may think. Anyone is susceptible to gaslighting, and it is a common technique of abusers, dictators, narcissists, and cult leaders. It is done slowly, so the victim doesn't realize how much they've been brainwashed.

The way I feel reading Hartman's email and the subsequent thread is much like what an artist or painter would feel if somebody broke into their studio and set their work on fire. Multiple people doing that deliberately during a time of pain and grief is far worse than the act of a lone vandal.

Have you witnessed abuse in free software communities?

My advice to anybody else witnessing or experiencing abuse like this is simple: speak up, don't bottle it up or leave it to somebody else. Don't assume the leaders or some distant community safety team will handle it, as we've seen in Debian, those teams have their own agendas. Speak to people face to face, starting with those you trust most.

Canonical Design Team: Getting Started with Knative on Ubuntu


What is Knative?

Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. There are three key features in Knative that help deliver its serverless mission:

  • Build – provides easy-to-use, simple source-to-container builds. You benefit by leveraging standard build mechanisms and constructs.
  • Serving – Knative takes care of the details of networking, autoscaling (even to zero), and revision tracking. You just have to focus on your core logic.
  • Eventing – universal subscription, delivery, and management of events. You can build modern apps by attaching containers to a data stream with declarative event connectivity and a developer-friendly object model.

What is Serverless?

Serverless computing is a style of computing that simplifies software development by separating code development from code packaging and deployment. You can think of serverless computing as synonymous with function as a service (FaaS). 

Serverless has at least three parts, and consequently can mean something different depending on your persona and which part you look at – the infrastructure used to run your code, the framework and tools (middleware) that hide the infrastructure, and your code which might be coupled with the middleware. In practice, serverless computing can provide a quicker, easier path to building microservices. It will handle the complex scaling, monitoring, and availability aspects of cloud native computing.

Why does serverless matter?

In a few words, productivity and innovation velocity. Serverless can help your developers and operations teams become more productive. More productive engineers are happier and innovate faster. How does it do this? How does serverless help improve productivity?

Before I oversell it I should mention that there can be difficulties. As with any new paradigm, some aspects of computing are made simpler, and other aspects are more difficult. What’s simpler? The act of building and deploying software. What’s more difficult? Well, you will now have a lot more functions to monitor and manage. Ensure the benefits exceed the disadvantages before committing to a large scale project.

How does it work?

Knative is Kubernetes-based, which means it leverages extensible features in Kubernetes. Custom resources (CRs) are a common way to extend Kubernetes – they allow you to define new resource types in Kubernetes and register operators that listen for events related to those resources.

In addition to defining their own resources, Knative leverages Istio to front several of its features, primarily serving and eventing. We’ll explore the architecture in more detail in a future post. For a quick look at the documentation, look here for serving, and here for eventing.
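Once Knative is installed (per the steps in this post), you can see these custom resources registered alongside the built-in Kubernetes ones; for example:

```
# List the CRDs that the Knative components register
kubectl get crds | grep knative.dev
# Or inspect a single API group, e.g. serving
kubectl api-resources --api-group=serving.knative.dev
```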

Getting Started with Knative on Ubuntu with MicroK8s

Regardless of the desktop or server operating system you use, my preference is to use multipass to quickly set up an Ubuntu VM. There are a few reasons for using an Ubuntu VM:

  1. These instructions have a greater chance of working 🙂
  2. All installed software is confined to the VM, which means you can easily delete everything by deleting the VM, or stop all associated running processes by stopping the VM
  3. These instructions will work in any VM – on your laptop or in the cloud
  4. You can constrain the resources consumed. This is important – if you turn on Knative monitoring, a lot more memory and CPU are consumed, which could overload your system if you don’t specify constraints

0. [optional] Create a VM

Use multipass to create an Ubuntu VM. This works on Windows, Mac, and Linux once multipass is installed. Alternatively, launch an Ubuntu VM in your environment of choice, such as AWS, GCP, Azure, VMware, OpenStack, etc.

multipass launch --mem 10G --cpus 6 --disk 20G --name knative
multipass shell knative

1. Install Microk8s

sudo snap install --classic microk8s
microk8s.status --wait-ready
sudo snap alias microk8s.kubectl kubectl

2. Install Istio

echo 'N;' | microk8s.enable istio
# check status of istio pods
kubectl get pods -n istio-system

3. Install Knative

export KNATIVE_071=releases/download/v0.7.1
export KNATIVE_070=releases/download/v0.7.0
export KNATIVE_URL=https://github.com/knative

# if you see errors, re-run next command as is
kubectl apply --selector knative.dev/crd-install=true \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml

# After this command, total of ~5G of disk and ~2G of memory used
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml

# ensure all pods are running
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods -n knative-eventing

###
# OPTIONAL - enable monitoring, which increases resource requirements (ie a lot more memory and cpu)
###
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml
# check that all monitoring pods are running
kubectl get pods -n knative-monitoring

4. Hello World

From here you can experiment with several of the hello world examples in Knative. Each component – build, serving, eventing – can be used independently. At the moment, the serving examples leverage local (Docker) builds rather than Knative build. Why? The local build is still somewhat simpler than a Knative build. I imagine this will change over time, showing better integration between the components. However, in the next post we’ll show a complete solution that relies on Knative only: you’ll be led through an example that uses Knative to build and serve a simple function, with a local container repository to host images.

For now, and to test a simple example, review the serving hello world samples. If you enjoy Java and Spark, try the helloworld-java-spark example on GitHub.
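For orientation, a minimal Knative Service manifest looks roughly like this sketch (the image is the upstream helloworld-go sample; with the 0.7 releases installed above, the serving API group is serving.knative.dev):

```yaml
# service.yaml – apply with: kubectl apply -f service.yaml
# Knative serving creates the Route/Configuration/Revision objects for you.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # upstream sample image
          env:
            - name: TARGET
              value: "World"
```

After applying, `kubectl get ksvc helloworld-go` should show the service URL; Knative scales the pods to zero when the service is idle.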

5. Steps Summary

# Launch a VM capable of running all four components of knative
multipass launch --mem 10G --cpus 6 --disk 20G --name knative
multipass shell knative

# Install MicroK8s
sudo snap install --classic microk8s
microk8s.status --wait-ready
sudo snap alias microk8s.kubectl kubectl

# Install Istio
echo 'N;' | microk8s.enable istio
# check status of istio pods
kubectl get pods -n istio-system

# Install essential components of knative
export KNATIVE_071=releases/download/v0.7.1
export KNATIVE_070=releases/download/v0.7.0
export KNATIVE_URL=https://github.com/knative

# if you see errors, re-run next command as is
kubectl apply --selector knative.dev/crd-install=true \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml

# After this command, total of ~5G of disk and ~2G of memory used
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml

# ensure all pods are running
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods -n knative-eventing

Next Steps

We’ve only scratched the surface of Knative. The next set of posts will focus on each major feature of Knative. The posts will include getting started guides with MicroK8s and Knative. That combination works great for local discovery, development, and as part of your continuous integration pipeline.

Resources

The post Getting Started with Knative on Ubuntu appeared first on Ubuntu Blog.

Canonical Design Team: The 10 new rules of open source infrastructure


Recently, I gave a keynote at the Cloud Native / OpenStack Days in Tokyo titled “the ten new rules of open source infrastructure”. It was well received, and folks on Twitter pointed out that they would like to see more detail around those ten rules; others seemed to benefit from clarifying commentary. I’ve attempted to summarize the points I made during the talk here, and I’m happy to have a conversation or add more rules based on your observations in this space over the last ten years. I strongly believe there are lasting concepts and axioms in infrastructure IT, and documenting some of them is important to guide the decisions that go into next-generation thinking as we evolve in this space.

1 Consume unmodified upstream.

The time for vendors to proclaim that they are able to somehow make open source projects “enterprise-ready” by releasing a “hardened” version of infrastructure software based on those upstream projects is over. OpenStack has been stable for multiple releases now and capable of addressing even the most advanced use cases and workloads without any vendor interference at all. See CERN. See AT&T.

I believe this is the most important rule of them all because it is self-limiting to not follow it: why would you restrict the number of people able to work on, support and innovate with your platform in production by introducing downstream patches? The whole point of open infrastructure is to be able to engage with the larger community for support and to create a common basis for hiring, training and innovating on your next-generation infrastructure platform.

2 Infrastructure-is-a-Product.

When implementing the next generation infrastructure, it is important to remember that you are essentially entering a market with competing alternatives for your constituency. In almost all cases that is a public cloud alternative, but even legacy stacks based on older technologies such as VMware can pose a significant risk, especially if your workload adoption is slow. Outreach, engagement with developers, and actively working on migrating workloads on to the new platform are critical to reaching a lower cost per computing unit than alternative platforms. The sooner that point is reached, the better.

Infrastructure should also not be treated as something “hand-crafted”. Large scale implementations are without exception based on standardization of components and simplicity in architecture and parameter tuning. Preserving the ability to transfer knowledge of existing clusters to new teams, to train new staff, and to recover from an “expert departure” in your team is key, and the only way to achieve this is by avoiding customized reference architectures that introduce technical debt into your infrastructure.

3 Automate for Day 1826.

Almost all teams have not automated to the degree they ought to, and most of them realize this at some level but fail to do something about it. Part of the reason certain tools have become popular with operators is that they address the first 80% of automation use cases quite nicely, but do so at the cost of being able to reasonably address the rest. The result is that lifecycle management events such as upgrades, canarying, expansion and so on remain complicated and continue to consume far more energy than they should. A simple test is this: if you still have to SSH into a server to perform some task, you simply have not automated enough. Any and all events concerning that machine should be addressable via API and through a comprehensive automation and orchestration setup.

When choosing your orchestration automation, assume that the technology stack will change over the course of your hardware amortization period (typically five years). Your VMware of today might be an OpenStack of tomorrow, might turn into a Kubernetes cluster on top, right next to it on bare metal, or even be replaced by it. You just don’t know. Once the “Kubernetes of serverless” crystallizes, you will have to automate the deployment and integration of that technology. Maintaining the same operational paradigm across those events is more important than ever as upstream innovation cycles shorten and the number of releases per year increase.

4 Run at capacity on-prem. Use public cloud as overflow.

If the goal is providing the best economics in the data center, running your on-premise infrastructure as close to capacity as possible is a natural consequence. Hardware should be chosen to provide the best value for the money invested in it, which may not always lead to the lowest cost in that investment, but will lead to the best economics overall, especially if the goal is to achieve comparable cost structures as public alternatives.

That said, don’t mistake this rule as a call to avoid public cloud. On the contrary, I recommend working with a minimum of two public cloud providers in addition to having a solid on-premise strategy that fulfills the economic parameters of your organization. Having two public cloud partners allows for healthy competition and enforces cloud-neutral automation in your operations, a key attribute of a successful multi-cloud strategy. While having certified administrators for public clouds is good, cloud-API-agnostic automation is better.

5 Upgrade, don’t backport.

At Ubuntu, we have always endorsed this paradigm, and it continues to be true for open infrastructure. As upstream project support cycles shorten (think of the number of supported releases and maintenance windows for OpenStack and Kubernetes, for example), one of the most important rules to follow is getting into the habit of upgrading rather than backporting, which introduces technical debt that exacerbates the cost of every lifecycle management event that comes after it. With the right automation process in place, upgrading should be predictable, and a solvable problem in a reasonable amount of time. Running your infrastructure as a product and consuming unmodified upstream code are both contributory factors in that predictability; without them, this is an almost unmanageable task at any scale.

6 Workload placement matters.

In a way, it’s understandable. The whole point of implementing a private cloud is so that you don’t have to worry about this. However, when you do care, it is typically almost impossible to establish any kind of debugging baseline. Clouds are by nature dynamic, so debugging what happened when some service-level violation occurred needs to take the changing nature of the infrastructure into account. All clouds of reasonable volume have this problem, and most operations teams ignore the necessity of maintaining the correlation between what happens at the bare metal level all the way to what happens at the virtual and container level. The smaller the unit of consumable compute is (think large VMs vs small VMs vs fat containers, vs … you get the idea), the more dynamic the environment typically is, and establishing causation between symptoms and root cause gets exponentially harder the more layers you introduce. Think about workload placement as you onboard tenants and establish the necessary telemetry to capture those events in their context. This will lead to predictive analysis, which ultimately may allow you to introduce AI into your operations (and the larger/more complex your cloud infrastructure is, the more urgent that will be).

7 Plan for transition

I made this point earlier, but it is unrealistic to expect a specific set of hardware to be tied to a specific infrastructure throughout the entirety of its lifespan. This is superbly exemplified in edge use cases. Telco edge specifically will have to address changing workload requirements over the next three years, as some workloads transition into containerized versions of themselves, others remain as VMs, and some remain on bare metal. The nature of the edge and its management requirements will evolve accordingly. Consequently, it’s not the containerized “end state” that is worth designing for, but the state of transition. Again, full automation and an identical operational paradigm across bare-metal provisioning, virtual machine management, OpenStack and Kubernetes will play a crucial role in achieving this design goal.

8 Security should not be special

Security patches are patches; security operations is simply operations. Most cloud projects are devised between developers and operations, and infrequently do they involve the often separate “security team”. What happens is that after much debate, discussion and creation of a roll-out plan, security teams are confronted with “done deals”, to which they mostly react by throwing cold water on all of those plans. Security is a mindset and an original posture that should be exhibited from the start and remain a focus as part of the requirements. Security isn’t special; it’s just as important and critical as any other non-functional requirement that has to be met by the cluster in order to meet the stakeholders’ expectations. So involve security early and often, and stay close, unless you’re willing to start over deep into the process.

9 Embrace shiny objects

The whole point of open infrastructure is to foster innovation and to give companies a competitive edge through the acceleration of their next-generation application rollout. Why stand in the way? If your developers want containers, why not? If your developers want serverless, why not? Being part of the solution rather than deriding new technology stacks as “shiny objects” only highlights a lack of confidence in the existing operational paradigm and automation. Sure, it’s fun to engage in some commentary over a beer after work – and that is exactly where it should stay.

10 Be edgy, go Micro!

Shameless plug: I work for Canonical and I care very much about two innovative projects I would like to highlight.

Microstack is a project for OpenStack Edge use cases and currently in beta. It installs a full OpenStack cluster on a single node and will support limited clustering once it goes GA. I’m super excited about it because it elegantly solves the majority of the requirements of small form-factor OpenStack in a clear and concise format, using the Snap application container format. It’s innovative, small, and deserves your attention.

Microk8s is the same for Kubernetes. A single snap package to install on any of the compatible Linux distributions, with a feature-rich add-on system that lets you provision service meshes such as Istio and Linkerd, Knative, Kubeflow and more. Check it out on microk8s.io.

Because both are distributed as snaps, they can be used in an IoT/Devices context as well as in a cloud or data center context. They can be installed with a single command:

$ snap install microstack --beta --classic
$ snap install microk8s --classic
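
After installation, a quick sanity check might look like the following. This is only a sketch based on the 2019-era MicroK8s CLI; the `microk8s.*` command names and the set of available add-ons may differ in your release, so check `microk8s.status` on your system first:

```shell
# Wait until the single-node cluster reports ready
microk8s.status --wait-ready

# Enable a couple of common add-ons (cluster DNS and the Kubernetes dashboard)
microk8s.enable dns dashboard

# Use the bundled kubectl to inspect the node and running workloads
microk8s.kubectl get nodes
microk8s.kubectl get all --all-namespaces
```

Because everything ships inside the snap, including `kubectl`, this workflow is identical on a laptop, an edge gateway, or a cloud VM.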

Conclusion

And that’s it – observations over the last ten years in the cloud industry, crystallized as ten new rules. I don’t intend these rules to be taken as ‘commandments’, and I’m certainly no Moses in this regard. I am simply summarizing the observations I’ve made over the course of the last ten years as an OpenStack architect and now product manager in this space.

The post The 10 new rules of open source infrastructure appeared first on Ubuntu Blog.

Ubucon Europe 2019: 2nd Call For Volunteers


Ticking, ticking … The clock doesn’t stop.

We are just under 3 months from the big event, UbuconEU2019, and it’s time to step up promotion of the event and call for volunteers.

Yes, we need your support now, during and after the event. Check out Trello to see where you can help and mark your support on the day of the event by signing up here.

What can you do?

Writing articles, reviewing and translating texts, creating visual content, and a lot more!

Get involved in the Ubuntu spirit.


Canonical Design Team: Handy productivity software for your home and office


Discovery is an integral part of any store experience. Sometimes, you know what you want and need, and the experience can be short and transactional. On other occasions, you want to explore, and search for new things. This applies equally to shopping malls as it does to software.

In this article, we would like to give you an overview of several rather interesting entries from the Productivity section in the Snap Store, to help you get started on your discovery journey. While Linux users are familiar with the tried-and-tested set of a small number of popular, long-time players, there are many colorful, unique applications out there, waiting to be found and used. Let’s browse around.

Celtx

Scriptwriting isn’t an everyday activity, but when you do decide to create your first theater performance or a screenplay, you realize that the task is far more complicated than it first seems. Celtx is a fully featured media pre-production tools suite, designed to help people organize and manage their scripts, novels and other projects.

The application can be a little overwhelming at first, as it comes with lots of options and features. Its tabbed interface lets you manage multiple projects at the same time, from audio and comic books to complete stage plays. Celtx also comes with several sample projects to help you get started.

In your project files, you can add scenes, and then annotate each one with notes, index cards, the list of characters (cast), and other information. This can help you, as well as other people collaborating on your project, more easily browse and manage potentially vast and complex sets of text and media. Reports are also very practical, especially if you are not too familiar with the specific work, as they let you peruse the material and get a better understanding at a glance.

The Master catalog is another highly useful feature – it contains all the resources you have put into your script, including media files, characters, locations, props, and other assets. You have the ability to add tags, so you can find the necessary content more quickly, or even export it so other people can use it.

Celtx is a complex, advanced toolbox for serious writers and production managers, and it hides an impressive volume of capabilities behind an unassuming interface. You can find and install the application from the Snap Store.

Imaginary Teleprompter

Perhaps you’re not writing screenplays, but maybe you do a lot of presentations, lectures or maybe speeches? Some people like to have a visual aid to help them deliver their material. For example, most office suite presentation applications come with a split mode, which lets you show slides on one screen but then have the thumbnails and comments on another. There are also programs that feature an overlay timer, to help you pace your presentations. And then, we’ve all seen blockbuster movies and series where important people use transparent teleprompters positioned slightly off camera, with important text scrolling up.

Studio equipment is beyond the budget and needs of most people, but having a tool that can help you sharpen up your presentations is quite useful. Imaginary Teleprompter is a professional-grade, free teleprompter. It comes with a long list of powerful features, like text mirroring and dual-screen support, rich text editing, image support, custom styles, tablet mode, webcam mode, auto-save and hardware-accelerated graphics.

Imaginary Teleprompter is relatively easy to use – and its default presentation gives you helpful hints on how to get started. For instance, you can use anchors – very much like HTML anchors, so you can jump between sections of your text. You also have the option to slow down or speed up the text scrolling using arrow keys. You can also pause the text.

The editor interface is very much like any word processor, so you have a lot of flexibility in how you create your content. At the top of the interface, there are several major functions, including the in-frame orientation (flip, mirror, etc.), you can use additional screens, change the color scheme, as well as decide where the actual focus is – by default the words you’re supposed to say are shown in the middle of the screen vertically. Then, you can also change the speed and font size, as well as display a timer.

Using Imaginary Teleprompter is quite fun, and it can also be used for recreational purposes, like movie-quote parties, or even karaoke, come to think of it. But its main focus is to help people improve the rhythm and precision of their presentations, and it does that remarkably well from a simple, web-like interface.

The application is available in the Snap Store.

Recollectr

Memory is a fickle thing. Which is why many people like to keep notes. If you’re after a powerful note-taking and reminder application, Recollectr probably fits the bill. When you install and launch the program for the first time, it will display the default, Quickstarter Guide to help you get around.

Recollectr comes with lots of useful options. For instance, you can import HTML and md files. You can also style and format your notes, with code snippets, bulleted or numbered lists, quotes, and more. You can create todo items with checkboxes and then set reminders. It allows you to add images to your notes, but from what I’ve been able to test, this only works by pasting an image from the clipboard rather than pointing to a file on the disk. Recollectr also comes with lots of keyboard shortcuts, designed to streamline your work.

When not in immediate use, Recollectr sits quietly in the system area, waiting for your prompt. Once you start accumulating notes, you will also begin to use and appreciate the search functionality. Finally, you can pin or archive notes to keep the clutter down or prioritize your work. Last but not least, paid users get encrypted cloud backups.

Recollectr aims to be several things, including a task manager with scheduled reminders, note keeper as well as an archive of useful factoids and data, and it manages to blend all this functionality without feeling busy or complicated.

Heimer

Notes can keep you tidy and organized, but what if you want to map and visualize complex activities or ideas without going full project manager? Heimer may be the answer. It’s a mind-map diagram-based creation tool, with focus on speed and simplicity.

The workflow is indeed simple and straightforward – you create blocks, which you can fill in with text, with optional color coding. You can then drag & drop the idea blocks around and associate them with others, to create the map you need. You can duplicate items in the diagram for faster work.

Heimer works best with an external mouse device, so you can quickly zoom in and out, or drag elements around. If you don’t like any of your changes, there’s unlimited undo functionality. Once you’re happy with your maps, you can export them, including as PNG files with a transparent background (the blue canvas is visible only in the application’s workspace).

Heimer works well on its own, but then you can use it alongside other productivity applications, like perhaps Celtx or Recollectr, especially if you need to visualize dependencies and hierarchy among different objects, ideas or concepts in your projects.

Summary

Productivity comes in all shapes and forms – and we’ve only covered a small facet of this category here. There’s actual value in the software we covered here, but then the discovery journey was also quite fun. Whether you’re just looking to somewhat streamline your task keeping or work on serious projects, there’s something to be found in the Store. We will come back in the future with other compilations from other categories. Meanwhile, if you have any recommendations on smart, practical software, please join our forum and let us know. We’re always happy to explore and learn new things.

Photo by Saad Sharif on Unsplash.

The post Handy productivity software for your home and office appeared first on Ubuntu Blog.

Ubuntu Podcast from the UK LoCo: S12E16 – Glider Rider


This week we’ve been learning about the crazy world of flat earthers. In a change to our scheduled programming we discuss Alan’s new lean podcasting experiment, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 16 of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Jonathan Carter: PeerTube and LBRY


I have many problems with YouTube, who doesn’t these days, right? I’m not going to go into all the nitty gritty of it in this post, but here’s a video from a LBRY advocate that does a good job of summarizing some of the issues by using clips from YouTube creators:

(link to the video if the embedded video above doesn’t display)

I have a channel on YouTube for which I have lots of plans for. I started making videos last year and created 59 episodes for Debian Package of the Day. I’m proud that I got so far because I tend to lose interest in things after I figure out how it works or how to do it. I suppose some people have assumed that my video channel is dead because I haven’t uploaded recently, but I’ve just been really busy and in recent weeks, also a bit tired as a result. Things should pick up again soon.

Mediadrop and PeerTube

I wanted to avoid a reliance on YouTube early on, and set up a mediadrop instance on highvoltage.tv. Mediadrop ticks quite a few boxes but there’s a lot that’s missing. On top of that, it doesn’t seem to be actively developed anymore so it will probably never get the features that I want.

Screenshot of my MediaDrop instance.

I’ve been planning to move over to PeerTube for a while and hope to complete that soon. PeerTube is a free software video hosting platform that resembles YouTube-style video sites. It’s on the fediverse, and videos viewed by users are shared via WebTorrent with other users who are viewing the same videos. After reviewing different video hosting platforms last year during DebCamp, I also came to the conclusion that PeerTube is the right platform to host DebConf and related Debian videos on. I intend to implement an instance for Debian shortly after I finish my own migration.

(link to PeerTube video if embedded video doesn’t display)

Above is an introduction to PeerTube by its creators (which runs on PeerTube, so if you’ve never tried it out before, there’s your chance!)

LBRY

LBRY App Screenshot

LBRY takes a drastically different approach to the video sharing problem. It’s not yet as polished as PeerTube in terms of user experience, and it’s a lot newer too, but it’s interesting in its own right. It’s also free software; it implements its own protocol, which you access via lbry:// URIs, and it prioritizes its own native apps over access in a web browser. Videos are also shared on its peer-to-peer network. One big thing it implements is its own blockchain, along with its own LBC currency (don’t roll your eyes just yet, it’s not some gimmick from 2016 ;) ). It’s integrated with the app so viewers can easily give a tip to a creator. I think that’s better than YouTube’s ad approach, because people can earn money by the value their video provides to the user, not by the amount of eyes they bring to the platform. It’s also possible for creators to create paid-for content, although I haven’t seen that on the platform yet.

If you try out LBRY using my referral code I can get a whole 20 LBC (1 LBC is nearly USD $0.04 so I’ll be rich soon!). They also have a sync system that can sync all your existing YouTube videos over to LBRY. I requested this yesterday and it’s scheduled so at some point my YouTube videos will show up on my @highvoltage channel on LBRY. Their roadmap also includes some interesting reading.

I definitely intend to try out LBRY’s features and it’s unique approach, although for now my plan is to use my upcoming PeerTube instance as my main platform. It’s the most stable and viable long-term option at this point and covers all the important features that I care about.

Ubuntu Blog: Getting started with AI

Getting started with Artificial Intelligence

From the smallest startups to the largest enterprises alike, organisations are using Artificial Intelligence and Machine Learning to make the best, fastest, most informed decisions to overcome their biggest business challenges.

But with AI/ML complexity spanning infrastructure, operations, resources, modelling, compliance and security, all while constantly innovating, many organisations are left unsure how to capture their data and get started on delivering AI technologies and methodologies.

Now is the time to take the plunge. Whether on-prem or in the cloud, you can establish an AI strategy that connects to your business case, forming a scalable AI solution that is focused on your particular data streams.

Whitepaper highlights:

  • Key concepts in AI/ML
  • Factors to consider and pitfalls to avoid
  • Roles, skill sets and expertise needed for success
  • Infrastructure and applications for multi-cloud operations for the full AI stack
  • Building a readiness plan to deliver AI insights powered by your data: discovery, assessment, design, implementation and operation and feedback

To view the whitepaper complete the form below:

The post Getting started with AI appeared first on Ubuntu Blog.

Podcast Ubuntu Portugal: Ep 60 – Rumo ao Monte da Lua


The drama of Intel 32-bit support being dropped, and news from Ubucon Europe 2019. Not forgetting some rather uninteresting details of the lives of this podcast’s participants… You know the drill: listen, subscribe and share!

  • https://discourse.ubuntu.com/t/intel-32bit-packages-on-ubuntu-from-19-10-onwards/11263/
  • https://ubuntu.com/blog/statement-on-32-bit-i386-packages-for-ubuntu-19-10-and-20-04-lts
  • https://ubucon.eu
  • Submit your talks: https://sintra2019.ubucon.org/call-for-papers-announcement/
  • Become a volunteer: https://framaforms.org/volunteers-voluntarios-ubucon-europe-2019-sintra-1559899302

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering); contact: thunderclawstudiosPT–arroba–gmail.com.

Another way to support us is to use our Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (the same way as in the suggested link) and you will also be supporting us.

Attribution and licenses

“Dingo” by Central Highlands Regional Council Libraries is licensed under CC BY 2.0

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

Daniel Pocock: Leadership and gossip in Debian


On a daily basis now, people ask me questions that remind me about the leadership problems in Debian. When I visit a free software event or another free software community, it comes up frequently.

It is a horrible situation. When people remind me about the emails sent by Chris Lamb in September 2018, there is nothing positive to say. It puts me in a position where there is no response other than asking them to question Lamb's credibility. This inevitably rubs off on Debian as a community.

When people realize that this issue relates to my private life and has nothing to do with my competence as a Debian Developer, they quickly apologize for intruding. On those occasions when I've explained the situation to people in any detail, the colour of their face has visibly changed, demonstrating an acute combination of sadness and anger at the way certain people in the Debian community, including the former leader, have behaved.

People have asked me why I didn't try to speak to Lamb. In fact, I tried. He lives in London, I visit there almost every month. I wrote to him numerous times and he always refused.

Between September and December 2018, I also wrote to a number of other members of the project to try and set up a meeting. They either didn't respond or declined. Yet I kept hearing more and more reports of Lamb's gossipmongering.

In my last blog, I revealed that one of the challenges I've faced was the death of my father. People simply can't understand why Lamb and his sidekicks would be undermining another Debian Developer, involved in the community for more than 20 years, at such a difficult time.

It is not easy to reduce a subject like that to a blog post. No cat picture can come close to explaining it. I don't intend to write more, nor can I, without violating the privacy of other people. Yet one of Lamb's missed opportunities as a leader is that he expected everything to be reduced to email or IRC. So he never actually knew any of this.

Earlier this year, somebody suggested taking a month off from Debian. It really misses the point. I never chose to have my private life and my professional life interconnected in this way. It was imposed on me by somebody who had the title of leader in an organization of 1,000 Developers but had dedicated more time to some people than others.

That brings me to another point: is everybody who has a public profile in the free software community going to be subject to similar attacks and criticism at a time of personal tragedy? Having mentored in GSoC and Outreachy for many years, I've frequently observed the challenges people go through making their first commit on a public repository or their first post to a mailing list. Many of them would never have done so if they saw what I've been put through by rogue elements of the Debian community.

Ultimately, as the leader created a state of hostility through inappropriate gossip, the only real solution is for the current leader of the project to publicly denounce the gossip and put the issue to rest for once and for all.

David Tomaschik: Hacker Summer Camp 2019: What I'm Bringing & Protecting Yourself


I’ve begun to think about what I’ll take to Hacker Summer Camp this year, and I thought I’d share some of it as part of my Hacker Summer Camp blog post series. I hope it will be useful to veterans, but particularly to first timers who might have no idea what to expect – as that’s how I felt my first time.

Since it’s gotten so close, I’ll also talk about what steps you should take to protect yourself.

Packing

General Packing

I won’t state the obvious in terms of packing most of your basic needs, including clothing and toiletries, but I will remind you that Las Vegas will be super hot. Bring clothes for hot days, and pack deodorant! Keep in mind that some of the clubs have a dress code, so if that’s your thing, you’ll want to bring clubbing clothes. (The dress code tends not to be too high, but often pants and a collared shirt.)

I will suggest bringing a reusable water bottle to help cope with the heat. Just before last summer camp, I bought a Simple Modern vacuum insulated bottle, and I absolutely love it. I’ll bring it again this year to stay hydrated. Because I hate heat, I’ll also be bringing a cooling towel, which is surprisingly effective at cooling me off. Perhaps it’s a placebo effect, but I’ll take it.

Remember that large parts of DEF CON are cash only, so you’ll need to bring cash (obviously). At least $300 for a badge, plus more for swag, bars, etc. ATMs on the casino floors are probably safe to use, but will still charge you fairly hefty fees.

Tech Gear

There are two schools of thought on bringing tech gear: minimalist and kitchen sink. I happen to be on the kitchen sink side of things. I’ll be bringing my laptop and a whole bunch of accessories. In fact, I have a whole travel kit that I’ll detail in a future post.

On the other hand, some people want the disconnected experience and bring little to no tech. Sometimes this is because of concerns over “being hacked”, but sometimes this is just to focus on the face-to-face time.

Shipping

There are some consumables where I just find it easier to ship to my hotel. Note that the hotel will charge you for receiving a package, but I still find it cheaper/easier to have these things delivered directly.

Getting a case of water delivered is much cheaper than buying from the hotel gift shop. Another option is to hit up a CVS or Walgreens on the strip for some bottled water.

I’m a bit of a Red Bull addict, so I often get a few packs delivered to have on hand. The Red Bull Red Edition is a nice twist on the classic that’s worth a try if you haven’t had the pleasure.

Safety & Security

DEF CON has a reputation for being the “most dangerous network in the world”, but I think this is completely overblown. It defies logic that an attacker with a 0-day on a modern operating system would use it to perform untargeted attacks at DEF CON. If their traffic is captured, they’ve burned their 0-day, probably just to grab some random attendee’s data – it’s simply not worth it to them.

That being said, you shouldn’t make yourself a target either. There are some simple steps you can (and should) take to protect yourself:

  • Use a VPN service for your traffic. I like Private Internet Access for a commercial provider.
  • Don’t connect to open WiFi networks.
  • Don’t accept certificate errors.
  • Don’t plug your phone into strange USB plugs.
  • Use HTTPS

These are all simple steps to protect yourself, both at DEF CON, and in general. You really ought to observe them all the time – the internet is a dangerous place in general!

To be honest, I worry more about physical security in Las Vegas – don’t carry too much cash, keep your wits about you, and watch your belongings. Use the in-room safe (they’re not perfect, but they’re better than nothing) to protect your goods.

Be aware of hotel policies on entering rooms – ever since the Las Vegas shooting, they’ve become much more invasive with forcing their way into hotel rooms. I recommend keeping anything valuable locked up and out of sight, and be aware of potential impostors using the pretext of being a hotel employee.

Good luck, and have fun in just over a week!


Lubuntu Blog: Lubuntu 18.10 End of Life and Current Support Statuses

Lubuntu 18.10, our first release with LXQt, has reached End of Life as of July 18, 2019. This means that no further security updates or bugfixes will be released. We highly recommend that you update to 19.04 as soon as possible if you are still running Lubuntu 18.10. The only currently-supported releases of Lubuntu today […]

Ubuntu Blog: Issue #2019.07.29 – Kubeflow Releases so far (0.5, 0.4, 0.3)

  • Kubeflow 0.5 simplifies model development with enhanced UI and Fairing library– The 2019 Q1 release of Kubeflow goes broader and deeper with release 0.5. Give your Jupyter notebooks a boost with the redesigned notebook app. Get nerdy with the new kfctl command line tool. Power to the people – use your favourite python IDE and send your model to a Kubeflow cluster using the Fairing python library. More training tools added as well, with an example of XGBoost and Fairing.
  • Kubeflow 0.4 Release: Enhancements for Machine Learning productivity– The 2018 Q4 release will supercharge your productivity! Fairing, a python library to keep you in python code, hits alpha. Katib – the necessary hyperparameter tool for production ready models – adds support for TFjob. Jupyter notebook CRD – you can create a notebook from the command line. Kubeflow Pipelines! With pipelines, you can codify your workflows, removing boilerplate drudgery, adding hours back to your life every week.
  • Kubeflow 0.3 Simplifies Setup & Improves ML Development– The 2018 Q3 release (see the pattern with releases?) sees the beginning of expanded options for training and serving. Kubeflow PyTorch and Chainer CRDs expand the model training options. Katib hits alpha. Kubebench, a tool for benchmarking, will assist you in assessing your hardware. Tensorflow Data Validation, part of TFX, added to Kubeflow Jupyter images. Model serving improvements through TF Serving and Nvidia’s TensorRT Inference Server.
  • Use Case Spotlight: AI in medicine – deriving intelligence from a sea of data– This article shows the power of what one person with passion can do to make an impact. Read this inspiring article about how an AI expert took on his toughest project ever: writing code to save his son’s life.


The post Issue #2019.07.29 – Kubeflow Releases so far (0.5, 0.4, 0.3) appeared first on Ubuntu Blog.

Jono Bacon: Ryan Bethencourt on Growing Sustainable Food, Shark Tank, and Wild Earth


What does bioscience innovation, sustainable pet food, and Shark Tank have in common? Ryan Bethencourt, that’s who.

Ryan is a pretty accomplished guy. He has studied business and biotech, molecular biopharmaceuticals, genetics, and brain tumor angiogenesis at Cambridge, Berkeley, Warwick, Stanford, and Yale. Oh, and he went on Shark Tank and raised $550k from Mark Cuban.

Ryan’s passion for biosciences took him to be an investment partner for Babel Ventures, a $26m fund focused on investing in emerging biotech companies (including Occamz Razor, Nebia, Vitagene, CUE, Forever labs, California Dreamin, Zbiotics and Finless Food) in Seed and Series A. He is also an advisor to companies and organizations including UCSF Health Hub, Memphis Meats, Berkeley Ultrasound, The Thiel Foundation, Vitro Labs, and many others.

Most recently though, he formed Wild Earth, a company producing sustainable pet food. He took his company onto Shark Tank and raised one of the largest amounts for his season. He works weekly with Mark Cuban.

In this episode of Conversations With Bacon we get into how innovation has evolved in biosciences, the role of the credit crunch, the horror show that is the pet food production industry, how Ryan and his team have approached growing better food (and the R&D and more), and where the future of biosciences lies. Oh, and of course, we delve into his experience of going on Shark Tank, behind the scenes, and what it is like to work with Mark Cuban. This is a really fascinating look into what the future of food looks like.

The post Ryan Bethencourt on Growing Sustainable Food, Shark Tank, and Wild Earth appeared first on Jono Bacon.

Ubuntu Blog: Monitoring at the edge with MicroK8s


One of the benefits of the internet of things is the feedback loop created between operators and connected devices. Instrumented devices communicate sensor data to operators, who monitor the data, take actions and elicit further control decisions. To leverage the benefit of such control feedback loops in IoT deployments, it is therefore necessary to have monitoring tools in the stack. 

This blog demonstrates how to easily deploy monitoring tools at the edge using Kubernetes. In IoT scenarios, such a deployment is most efficient when carried out at the edge, bringing benefits in privacy, latency and bandwidth cost. 

Luckily, MicroK8s, the single-node Kubernetes, caters for such use cases. The beauty of MicroK8s is that deployment can be done in a couple of commands and in under a minute. MicroK8s can even fit on a Raspberry Pi. In a nutshell, it is possible to build a monitoring stack on MicroK8s, deployable anywhere, even at the extreme edge. From a technical perspective, this simple solution is enabled by the following popular open source components, delivered with MicroK8s:  

  • Grafana for the front-end analytics dashboard
  • Prometheus for the back-end time-series database (ideal for sensor data)

To get up and running, follow these steps:

Installing MicroK8s, Grafana and Prometheus

First, install the MicroK8s snap

sudo snap install microk8s --classic

Once installed, it is possible to list all the add-ons that are delivered with MicroK8s.

microk8s.status

(alternatively)

 microk8s.enable --help

None of the MicroK8s add-ons are enabled by default. Therefore Grafana and Prometheus will need to be enabled upon installation. Here is how to enable these applications:

microk8s.enable dashboard prometheus

There is no specific command to enable Grafana. It is enabled automatically when the Kubernetes dashboard is enabled.

Accessing the Kubernetes dashboard

For the next steps, the kubectl command will be invoked. Because MicroK8s namespaces its commands, it is convenient to alias microk8s.kubectl to kubectl. This step is simple to reverse if the alias loses its convenience.

sudo snap alias microk8s.kubectl kubectl

The access token for the Kubernetes dashboard can then be retrieved by invoking kubectl as shown below:

export TOKEN=$(kubectl describe secret $(kubectl get secret | awk '/^dashboard-token-/{print $1}') | awk '$1=="token:"{print $2}') && echo -e "\n--- Copy and paste this token for dashboard access --\n$TOKEN\n---"

To access the Kubernetes dashboard, it will be necessary to create a secure channel to the cluster with the command below:

kubectl proxy &

The dashboard will then be accessible at:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Copying and pasting the token generated above will grant you access to the web-based interface to your cluster.

Accessing Grafana and Prometheus

As an initial step, we can confirm that Grafana and Prometheus are running on the cluster.

kubectl cluster-info

Once confirmed, you can view these services in the Kubernetes dashboard by selecting ‘monitoring’ under ‘Namespaces’, and then clicking ‘Services’. The list of monitoring services running on the cluster will then be displayed, along with the associated cluster IP addresses, internal endpoints and ports.

The Grafana and Prometheus UIs can then be simply accessed by entering the service IP and ports in the browser, according to the following format:

<cluster IP>:<port>

For Grafana, the username and password will be: admin/admin.

Grafana comes pre-configured with Prometheus as a data source.

The Prometheus UI can be accessed in a similar manner. No username and password will be required.
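The same endpoint that serves the Prometheus UI also exposes the Prometheus HTTP API, which is handy for scripting queries against the cluster. As a minimal sketch, a query URL for the standard `/api/v1/query` instant-query endpoint can be built like this (the cluster IP and port below are hypothetical placeholders; substitute the values shown for the Prometheus service in the Kubernetes dashboard):

```python
from urllib.parse import urlencode

# Placeholder service address; use the cluster IP and port from the dashboard.
PROMETHEUS_BASE = "http://10.152.183.39:9090"

def prometheus_query_url(base: str, promql: str) -> str:
    """Build an instant-query URL for the Prometheus HTTP API."""
    # urlencode handles PromQL characters such as (), [] and spaces.
    return f"{base}/api/v1/query?{urlencode({'query': promql})}"

# The built-in 'up' metric reports which scrape targets are reachable.
url = prometheus_query_url(PROMETHEUS_BASE, "up")
```

Fetching that URL (with curl or urllib) returns a JSON document with the current sample for each matching series.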

Going further

Further to this easy setup, it is possible to feed sensor data into the Prometheus database. Prometheus has several client libraries for exporting sensor measurements. Additionally, panels and custom dashboards can be created in Grafana according to the metrics tracked for a particular use case.
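To illustrate what those client libraries produce, here is a stdlib-only sketch of a `/metrics` endpoint serving the Prometheus text exposition format. The metric name and the simulated sensor reading are invented for the example; in practice the official client libraries handle this formatting (plus counters, histograms, and so on) for you:

```python
import http.server
import random
import threading

def render_metrics() -> str:
    # Prometheus text exposition format: HELP and TYPE lines, then samples.
    temperature = 20.0 + random.random() * 5  # stand-in for a real sensor read
    return (
        "# HELP edge_temperature_celsius Temperature from the edge sensor.\n"
        "# TYPE edge_temperature_celsius gauge\n"
        f"edge_temperature_celsius {temperature:.2f}\n"
    )

class MetricsHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_metrics(port: int = 8000) -> http.server.HTTPServer:
    """Start the exporter in a background thread and return the server."""
    server = http.server.HTTPServer(("0.0.0.0", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With this running on a device, a Prometheus scrape job pointed at the device's `:8000/metrics` endpoint would pull the reading into the time-series database on each scrape interval.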

Resources

MicroK8s

Running Microk8s on Raspberry Pi

Prometheus client libraries

Grafana basic concepts

The post Monitoring at the edge with MicroK8s appeared first on Ubuntu Blog.

The Fridge: Ubuntu Weekly Newsletter Issue 589


Welcome to the Ubuntu Weekly Newsletter, Issue 589 for the week of July 21 – 27, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License
