Channel: Planet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 594


Welcome to the Ubuntu Weekly Newsletter, Issue 594 for the week of August 25 – 31, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License


Stephen Michael Kellat: Monitoring Dorian


Currently the hurricane known as Dorian is pounding the daylights out of The Bahamas. The Hurricane Watch Net and the Hurricane VoIP Net are both up. Members of the public can monitor audio from the Hurricane VoIP Net by putting http://74.208.24.77:8000 into a suitable streaming media player such as VLC. Updates are generally on the hour. Members of Ubuntu Hams looking to follow matters on EchoLink should utilize the *WX5FWD* and *KC4QLP-C* conferences.

The storm is moving fairly slowly. This event is likely to continue for a while.

Ubuntu Blog: Design and Web team summary – 03 September 2019


This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Build and release dqlite.io

The dqlite.io site has been built and deployed. It will gain a Discourse in the near future, but for now the documentation is housed in the project.

Ubuntu.com has been upgraded to Vanilla 2.3.0

The vast majority of the work was updating the markup to match the new classes and markup structure required for Vanilla 2.

Design takeover templates

To speed up delivery of takeovers, we are planning a set of reusable takeover templates, which should accelerate both their design and development going forward.

Base

Base is the team that underpins the toolsets and architecture of our projects. They maintain the CI and deployment of all the websites we manage.

Certification development

Development of the certification site is complete, and it is now in QA before being released to replace the current certification website.

MAAS

The MAAS squad develops the UI for the MAAS project.

Convert settings to React

Most parts of the settings are now in React; the only outstanding ones are DHCP snippets and Scripts, which will be completed in the next iteration.

Machine summary network card design

The machine summary page shows a summary of the physical characteristics of a machine. This cycle we are adding a new card that displays its network characteristics. After a number of iterations, we decided to include the fabric (untagged traffic only) that each interface is connected to, link speed and status, and a DHCP and SR-IOV overview.

Vanilla

The Vanilla squad designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

login.ubuntu.com on Vanilla

We are on the final stretch of migrating login.ubuntu.com to Vanilla 2.3.0. This iteration we completed all account pages, updated the email template, and updated some miscellaneous pages found in the IA.

JS in documentation

We currently don’t document that some components require JavaScript, and that users will need to provide the implementation. As a first pass, we are adding a notification with a functionality section to the relevant docs pages.

This iteration we designed a pattern to display the JavaScript required for a Vanilla component; it will be developed in the coming weeks to supersede our current implementation.

Snapcraft

The Snapcraft team works closely with the Snap Store team to develop and maintain the Snap Store website.

Guided feature tour for publishers

Recent work was focused on building a guided tour that allows us to highlight new and/or important features to our users. Via small steps, a tour explains the concept, functionality and different UI elements of a feature, helping to educate and onboard users.

The very first tour implemented can be found within the snap listing page. This aims to help publishers improve quality and overall completion of their public listing in the Snap Store.

Robert Ancell: GUADEC 2019 - Thessaloniki

I recently attended GUADEC 2019 in Thessaloniki, Greece. This is the seventh GUADEC I've attended, which came as a bit of a surprise when I added it up! It was great to catch up in person (some again, and some new!) and as always the face to face communication makes future online interactions that much easier.

Photo by Cassidy James Blaede

This year we had seven people from the Canonical Ubuntu desktop team in attendance. Many other companies and projects had representatives (including Collabora, Elementary OS, Endless, Igalia, Purism, Red Hat, SUSE and System76). I think this was the most positive GUADEC I've attended, with people from all these organizations actively leading discussions and a general consideration of each other as we try to maximise where we can collaborate.

Of course, the community is much bigger than a group of companies. In particular it was great to meet Carlo and Frederik from the Yaru theme project. They've been doing amazing work on a new theme for Ubuntu and it will be great to see it land in a future release.

In the annual report there was a nice surprise; I made the most merge requests this year! I think this is a reflection on the step change in productivity in GNOME since switching to GitLab. So now I have a challenge to maintain that for next year...


If you were unable to attend, you can watch all the talks on YouTube. There are two talks I'd like to highlight. The first is by Britt Yazel from the Engagement team, Setting a Positive Voice for GNOME. He talked about how open source communities have a lot of passion - and that has good and bad points. The Internet being as it is can lead to the trolls taking over, but we can counter that by highlighting positive messages and showing the people behind GNOME. One of the examples showed how Ubuntu and GNOME have been posting positive messages about each other on their channels, which is great!


The second talk was by Georges Basile Stavracas Neto, About Maintainers and Contributors. In it he talked about the difficulties of being a maintainer and the impact of negative feedback. It resonated with Britt's talk in that we need to highlight that maintainers are people who are doing their best! As stated in the GNOME Code of Conduct: assume people mean well (they really do!).


Georges and I are co-maintainers of Settings and we had a productive GUADEC and managed to go through and review all the open merge requests.

There were a number of discussions around Snaps in GNOME. There seemed a lot more interest in Snap technology compared to last GUADEC and it was great to be able to help people better understand them. Work included discussions about portals, better methods of getting the Freedesktop and GNOME stacks snapped, Snap integration in Settings and the GNOME publisher name in the Snap Store.

I hope to be back next year!

Jonathan Riddell: OpenUK Meets the Crumbling of UK Democracy


This week I went to Parliament Square in Edinburgh, where the highest court of the land, the Court of Session, sits. The courtroom viewing gallery was full: concerned citizens there to watch, and journalists enjoying the newly allowed ability to post live from the courtroom. They were waiting for Joanna Cherry, Jo Maugham and the Scottish Government to bring their legal challenge against the UK Government's move to shut down parliament. The UK Government filed their papers late and didn't bother completing them, omitting the important signed statement from the Prime Minister saying why he had ordered parliament to be shut. A UK Government which claims to care about Scotland but ignores its people, government and courts cannot argue it is working for democracy or the union it wants to keep.

Outside, under the statue of Charles II, I spoke to the vigil assembled there in support. I said that democracy can't be shut down, but it does need the people to pay constant attention and play their part.

Charles II was King of Scots who led Scots armies that were defeated twice by the English Commonwealth army, busy invading neighbouring countries and claiming London and its English parliament gave them power over us all. So I went to London to check it out.

In London that parliament is falling down. Scaffolding covers it in an attempt to patch it up. The protesters outside held a rally where politicians from the debates inside wandered out to give updates as they frantically tried to stop an unelected Prime Minister from taking away our freedoms and citizenship. Comedian Mitch Benn compèred it, leading the rally and saying he wanted everyone to show their English flags with pride, the People's Vote campaign trying to reclaim them from the racists; it worked with the crowd and shows how our politics is changing.

Inside the Westminster Parliament compound, past the armed guards and threatening signs of criminal repercussions, the statue of Cromwell stands proud. He invaded Scotland and murdered many Irish: a curious character to celebrate.

The compound is a bubble, the noise of the protesters outside wanting to keep their freedoms drowned out as we watched a government lose its majority, and the confidence on their faces, familiar from years of self-entitlement, vanish.

Pete Wishart, centre front, is an SNP MP who runs the All Party Intellectual Property group. He invited us in for the launch of OpenUK, a new industry body for companies who want to engage with government on open source solutions. Too often government puts out tenders for jobs and won't talk to providers of open source solutions because we're too small and the names are obscure. Too often when governments do implement open source and free software setups, they get shut down because someone with more money comes along and offers their setup and some jobs. I've seen that in Nigeria, I've seen it happen in Scotland, I've seen it happen in Germany. The power and financial structures that proprietary software creates allow for the corruption of the best solutions to a problem.

The Scottish independence supporter Pete spoke of the need for Britain to have the best Intellectual Property rules in the world, to a group who want to change how intellectual property influences us, while democracy falls down around us.

The protesters marched over the river closing down central London in the name of freedom but in the bubble of Westminster we sit sipping wine looking on.

The winners of the UK Open Source Awards were celebrated and photos taken: (previously) unsung heroes working to keep the free operating system running, opening up how plant phenomics works, improving healthcare in ways that cannot be done when closed.

Getting government engagement with free software is crucial to improving how our society works, but the politicians are far too easily swayed by big branding and big-name budgets rather than making sure barriers are reduced to be invisible.

The crumbling of one democracy alongside a celebration and opening of a project to bring business to those who still have little interest in it.  How to get government to prefer openness over barriers?  This place will need to be rebuilt before that can happen.

Onwards to Milan for KDE Akademy.

 

Ubuntu Podcast from the UK LoCo: S12E22 – Shadow of the Beast


This week we’ve been playing with the GPD WIN 2. We interview Sarah Townson about Science Oxford and making fighting robots, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 22 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Desktop

sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04

Server

sudo apt install --install-recommends linux-generic-hwe-18.04
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image taken from Shadow of the Beast published in 1989 for the Amiga by Psygnosis.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Ubuntu Blog: The teleop_tools arrive in ROS 2 Dashing!

Ubuntu & ROS 2

After exploring some ROS 2 subtleties and implementing some CLI tools we felt were missing, the time has come to get our hands even more dirty.

What better way to learn than by doing?

C’est en forgeant qu’on devient forgeron

Hmm, pardon my French,

Practice makes perfect

To enter the realm of ROS 2 and discover its wording, its patterns, its colors, we decided it was time to do something a little larger than yet another publisher demo. Don't get me wrong, these kinds of examples are great, and are often a go-to when the new material isn't fully mastered yet. But dealing with an actual package better highlights the various intricacies of the code – how the pieces fit together and possibly blow up in your face.

We will leave covering the differences between ROS 1 and ROS 2 to past and future posts.  Here instead we will advertise a ROS 2 port of a very useful set of ROS 1 tools: the teleop_tools package.

teleop_tools, tools for tele-operation

As its name suggests, the teleop_tools package is a collection of tools for tele-operating a robot. The three main components are mouse_teleop, key_teleop and joy_teleop.

Allow me to briefly motivate why this particular package was chosen for this exercise. First, I personally use at least two of the three tools on a (very) regular basis. They do a simple job but do it well. If you have never used them, give them a look; they are worth it. Especially if you are a ROS 2 user: there aren't many such tools just yet, and after testing these you may wonder why you'd bother with any others. Second, they aren't overly complex, but their functionality covers a lot of the main aspects of ROS 2: parameters, topics, services, actions, etc. Combining these two facts, this package is an ideal candidate for better learning ROS 2 while bringing something very useful to the community.

mouse_teleop

Is your 12 000 dpi gamer mouse getting dusty? QtCreator isn’t quite the thrill of a FPS game? Mourn no more, for the mouse_teleop package allows you to send twist commands over a topic with your mouse!

How to use,

$ ros2 run mouse_teleop mouse_teleop

The following GUI should appear,

mouse_teleop gui

key_teleop

This package offers a very simple terminal-based interface to send twist commands at the tip of the four arrow keys of a keyboard.

ROS 1 wiki page

How to use,

$ ros2 run key_teleop key_teleop

The following interface should appear in your terminal,

key_teleop interface

joy_teleop

The joy_teleop package is likely the most interesting of the three tools, as it offers extended functionality compared to the previous two. Listening to a sensor_msgs/msg/Joy message (e.g. published by a joy node), it supports mapping different actions to each button (or button combination) of a remote controller through a configuration file. Mapped actions can be any of the three basic ROS 2 interfaces,

  • publishing to a topic
  • requesting a service
  • sending an action goal

ROS 1 wiki page

A brief example of a joy_teleop configuration file is given below, showcasing each of the three interfaces.

joy_teleop:
  ros__parameters:
    move:
      type: topic
      interface_type: geometry_msgs/msg/Twist
      topic_name: cmd_vel
      axis_mappings:
        linear-x:
          axis: 1
          scale: 0.5
        angular-z:
          axis: 2
          scale: 0.5

    add_two_ints:
      type: service
      interface_type: example_interfaces/srv/AddTwoInts
      service_name: add_two_ints
      service_request:
        a: 11
        b: 31
      buttons: [10]

    fibonacci:
      type: action
      interface_type: action_tutorials/action/Fibonacci
      action_name: fibonacci
      action_goal:
        order: 5
      buttons: [4, 5, 6, 7]

How to use,

ros2 launch joy_teleop joy_teleop.launch.py

Note that the package provides a configuration file example to get you started.

Conclusion

teleop_tools just landed in ROS 2 Dashing, so it's not available in the ROS Debian repositories just yet. If you'd like to use it now you can always build it from source, but it should be released soon. Overall the experience was successful – I learned more of the ins and outs of ROS 2, and now there's another incredibly useful set of tools available to the community!
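If you do want to try it before the release, building from source follows the standard colcon workflow. A minimal sketch, assuming an installed and sourced ROS 2 Dashing environment and the upstream ros-teleop/teleop_tools repository; the workspace path is an arbitrary choice:

```shell
# Sketch only: requires a sourced ROS 2 Dashing environment and colcon.
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
git clone https://github.com/ros-teleop/teleop_tools.git
cd ~/ros2_ws
colcon build --packages-up-to joy_teleop key_teleop mouse_teleop
source install/setup.bash
```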

Do you know of any other tools such as teleop_tools that you are deeply missing in your new ROS 2 habits? Please let us know, or feel free to share any other feedback and (hopefully not too many) tickets on GitHub!

Ubuntu Blog: Management of snaps in a controlled, enterprise environment

Enterprise management of snaps

Few enterprises want all their computing devices to be fully exposed to the internet. In an environment of ever-growing security threats, isolating internal networks from the wider internet is not simply best practice, but borderline essential.

However, for all the benefits that restricted networks provide, they can pose challenges for enterprises looking to take advantage of certain technologies. One of these is the automatic update feature of snaps, which enables a low-friction process and a fast release cadence. A restricted network prevents snaps from updating automatically, due to the need for an external internet connection, and can upset change management policies.

The Snap Store Proxy overcomes these issues and enables enterprises to take full advantage of snaps while maintaining the governance and security control that they require.

In this whitepaper, you will learn:

  • The normal snap update cycle in a non-restricted environment
  • The challenges the Snap Store Proxy can solve for enterprises from those working in regulated industries to those that have strict internal release processes
  • How to take advantage of new technologies such as Kubernetes in a controlled environment

To view the whitepaper, complete the form below:


Simos Xenitellis: How to use the LXD Proxy Device to map ports between the host and the containers


LXD supports proxy devices, which are a way to proxy connections between the host and containers. This includes TCP, UDP and Unix socket connections, in any combination between each other, in either direction. For example, when someone connects to your host on port 80 (http), the connection can be proxied to a container using a proxy device. In that way, you can isolate your Web server in a LXD container. By using a TCP proxy device, you do not need to resort to iptables.

There are 3×3=9 combinations for connections between TCP, UDP and Unix sockets, as follows. Yes, you can proxy, for example, a TCP connection to a Unix socket!

  1. TCP to TCP, for example, to expose a container’s service to the Internet.
  2. TCP to UDP
  3. TCP to Unix socket
  4. UDP to UDP
  5. UDP to TCP
  6. UDP to Unix socket
  7. Unix socket to Unix socket, for example, to share the host’s X11 socket to a container. Or, to make available a host’s Unix socket into the container.
  8. Unix socket to TCP
  9. Unix socket to UDP

Earlier I wrote that you can make a connection in either direction. For example, you can expose the host's Unix socket for X11 into the container so that the container can run X11 applications and have them appear on the host's X11 server. Or, the other way round, you can make LXD's Unix socket on the host available to a container so that you can manage LXD from inside a container.
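As a sketch of that Unix-socket case, sharing the host's X11 socket into a container could look like the following; the display number (X0) and the uid/gid of 1000 are assumptions for a typical single-user desktop and the first container user:

```shell
# Sketch: proxy the host's X11 abstract socket into the container.
# bind=container makes the listen side live inside the container;
# security.uid/gid set the owner of the listening socket there.
lxc config device add mycontainer X0 proxy \
    listen=unix:@/tmp/.X11-unix/X0 \
    connect=unix:@/tmp/.X11-unix/X0 \
    bind=container \
    security.uid=1000 security.gid=1000
```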

Note that LXD 3.0.x only supports TCP to TCP proxy devices. Support for UDP and Unix sockets was added in later versions.

Launching a container and setting up a Web server

Let’s launch a container, install a Web server, and, then expose the Web server to the local network (or the Internet, if you are using a VPS/Internet server).

First, launch the container.

$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

We get a shell into the container, update the package list and install nginx. Finally, verify that nginx is running.

ubuntu@mycontainer:~$ sudo apt update
ubuntu@mycontainer:~$ sudo apt install -y nginx
ubuntu@mycontainer:~$ curl http://localhost
...
 Welcome to nginx! 

Exposing the Web server of a container to the Internet

We log out to the host and verify that there is no Web server already running on port 80. If port 80 is not available on your host, change it to something else, like 8000. Finally, we create the TCP to TCP LXD proxy device.

ubuntu@mycontainer:~$ logout
$ lxc config device add mycontainer myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
Device myport80 added to mycontainer

The command that creates the proxy device is made of the following components.

  1. lxc config device add, we configure a device to be added,
  2. mycontainer, to the container mycontainer,
  3. myport80, with the name myport80,
  4. proxy, the device type; we are adding a LXD Proxy Device,
  5. listen=tcp:0.0.0.0:80, we listen (on the host by default) on all network interfaces on TCP port 80.
  6. connect=tcp:127.0.0.1:80, we connect (to the container by default) to the existing TCP port 80 on localhost, which is our nginx.

Note that previously you could specify hostnames when creating LXD proxy devices. This is no longer supported (it has security implications), so you get an error if you specify a hostname such as localhost. This post was primarily written because the top Google result on proxy devices is an old read-only Reddit post that suggests using localhost.

Let’s test that the Web server in the container is accessible on the host. We can use both localhost (or 127.0.0.1) on the host to access the website of the container. We can also use the public IP address of the host (in this case, the LAN IP address) to access the container.

$ curl http://localhost
...
 Welcome to nginx! 
...
$ curl http://192.168.1.100
...
 Welcome to nginx! 
...

Other features of the proxy devices

By default, a proxy device exposes an existing service in the container to the host. If we need to expose an existing service on the host to a container, we would add the parameter bind=container to the proxy device command.
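A minimal sketch of that reversed direction, assuming a hypothetical service on the host's port 8080 that the container should reach on its own port 80 (the device name and ports are illustrative):

```shell
# With bind=container, listen is interpreted inside the container and
# connect on the host: the reverse of the default direction.
lxc config device add mycontainer hostservice proxy \
    listen=tcp:127.0.0.1:80 \
    connect=tcp:127.0.0.1:8080 \
    bind=container
```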

You can expose a single Web server to a port on the host. But how do you expose many Web servers in containers to the host? You can use a reverse proxy that sits in front of the containers. To retain the remote IP address of the clients visiting the Web servers, you can add proxy_protocol=true to the proxy device to enable support for the PROXY protocol. Note that you also need to enable the PROXY protocol on the reverse proxy.
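On the Web server's side, accepting the PROXY protocol might look like the fragment below: a sketch for nginx built with the realip module, where the trusted subnet is an assumption standing in for whatever network your proxied connections arrive from:

```nginx
server {
    # Accept the PROXY protocol header prepended to each connection.
    listen 80 proxy_protocol;

    # Trust the proxy and recover the real client address from the header.
    set_real_ip_from 10.0.0.0/8;     # assumption: your LXD bridge subnet
    real_ip_header proxy_protocol;
}
```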

Jono Bacon: People Powered Voices: Whitney Bouck, COO of HelloSign


A few weeks ago I announced my brand new book, ‘People Powered: How communities can supercharge your business, brand, and teams’, published by HarperCollins Leadership, and available on 12th November 2019.

One of the challenges in writing a book such as ‘People Powered’ is selecting stories, examples, and case studies to include in the book. While I included stories from Joseph Gordon-Levitt, Jim Whitehurst, Ali Velshi, Alexander van Engelen, Jim Zemlin, Peter Diamandis, and others, the world of community building is far more expansive and diverse than these stories alone. There are thousands of examples and learnings out there in incredible communities being built by a broad range of people and backgrounds.

Some of these stories I am featuring on my podcast, Conversations With Bacon (such as Emily Musil Church from XPRIZE and Kate Drane from Techstars), but I also want to share these stories here on my blog too, especially from underrepresented groups across a broad range of industries and experience. There are so many fantastic people out there doing this work and I can use my website and platform to do a better job in helping to share some of these stories.

As such, I am kicking off a series of interviews here on my blog called People Powered Voices. These interviews are designed to augment the stories and examples in the book to provide a more comprehensive set of material for you all to pull from. I am also linking these from the Resources section of my website (which I repeatedly reference in ‘People Powered’ as a source of additional material, templates, and content that expands on the book). This extra material will be available soon.

So, let’s get this party started, and I am really excited about my first interview…

Whitney Bouck, COO of HelloSign

Whitney Bouck is one of the most incredible people I have met in my career. Previously running marketing at EMC and Box, Whit has also advised numerous companies in her work as an advisor with the Y Combinator continuity fund.

I first met Whit at a conference my wife, Erica, and I joined in Hawaii. This conference included a set of attendee-driven discussion sessions where attendees shared their experience on a wide variety of topics related to running businesses.

Whit’s contributions all had a common theme: the importance of unlocking potential in the people inside and outside of a company. As I discovered more about her work, and as we became friends, it has been fascinating to learn about her approach and experience.

The Dropbox and HelloSign Executives

Whit is part of the HelloSign leadership team, which is now part of Dropbox (HelloSign was acquired back in February). HelloSign still runs as a separate business, and her role as COO is to oversee the go-to-market functions (sales, marketing, business development, customer support/success) as well as the finance, legal and strategic planning functions.

Importantly though, Whit is deeply involved in how the various communities at HelloSign are shaped.

When I think of the HelloSign community, I think of an onion—multiple layers that build on one another to create a whole that is greater than the sum of the parts.

Like many other organizations, HelloSign has focused a chunk of their community strategy in building a community of integration partners. This enables HelloSign to be integrated tightly into the workflow of their customers. This includes integrations with Google Docs, Zapier, Box, Hubspot, Slack, and others.

No software solution is an island…it must connect with and work with the other systems within a company. By building strong relationships and integrations with our partners, we build an ecosystem to support the ways our customers use our solutions. By leveraging this ecosystem, we’re also able to incorporate our customers’ technologies of choice into our solutions, ensuring we’re providing the best value possible to the customer.

These kinds of integration communities can be powerful, but as discussed in ‘People Powered’, I refer to these as a Collaborator model community, of the Outer type. Building these effectively requires a careful balance of targeted personas, clear developer onramps, documentation, and places for developers to get help.

Unsurprisingly, Whit is also passionate about their customer community.

Customers make up a key layer of our HelloSign community. We invest a lot of time in creating spaces for them to come together to share best practices, tips, and tricks as well as learn about new features that allow them to streamline workflows and have a great experience with our products.

The HelloSign Community

While many organizations would stay focused purely on this existing work, HelloSign has also been eager to explore other types of communities, most notably in skills development of their customers beyond their product.

One of the great examples that I’m really proud of was a thought leadership program we ran all of last year called Digital Strength. The goal of the program was to give everyone within an organization, from the C-Suite to individual contributors, the guidance to better understand and achieve digital transformation (that elusive digital nirvana every company is striving for!). Every month, we delivered a different ‘chapter’ with a live webinar, a white paper, a video and much more, giving people the skills and plans needed to accelerate their own digital journey. Naturally, our intent was to not only be helpful, but to bring people together who share the mission of digital transformation— a community focused on a common goal who can not only benefit from our insights but also from each other.

The program was a huge success, with thousands of participants, 15% of whom were from Fortune 2000 companies. The sheer number of sign-ups was a clear indicator that people were looking to learn from shared experiences.

The program ran in 2018, but is still available to anyone who is interested, offering full access to methods and techniques that can be used to create and measure digital transformation projects.

Lessons Learned

I am always eager to get a sense of what key lessons people such as Whit would pass on to my readers who are pursuing an interest or career in community strategy. She started with the importance of building purpose.

It is important to recognize that community for community’s sake will not be successful. A true community is built off of common interests, missions or goals. True communities often form organically; they are not a quick-fix solution to a weak culture or lack of diversity. Shared experiences, interests and motivations are the elements that create a strong, replenishing community—and often are the same elements that pull like-minded individuals together of their own volition and need. Many nonprofits have grown across generations and dynamic social climates because of people’s passion for their cause.

Across these different communities, Whit has also observed something that many successful organizations love about communities: glueing together a network of minds, packed with experience, ideas, and potential.

I’ve also learned that strong communities are self-perpetuating. They build upon themselves. Members draw from each other, starting a domino effect based on shared attitudes, interests or goals. This creates the potential for mass expansion and growth, especially within communities that value shared experiences and learning. Think about social media platforms that seemingly blew up in popularity overnight, or global movements of individuals gathering in like-minded celebration or protest.

Finally, Whit shared that any community, be it internal, focused on partners, or a customer community, needs care and feeding.

It is important to remember that communities need to be nurtured. Some of the healthiest communities I have seen or been a part of have a consistent influx of fresh ideas, new contributions and new members, all of which provide ongoing value and new perspectives for everyone in the community.

Thanks, Whit, and keep up the great work!

The post People Powered Voices: Whitney Bouck, COO of HelloSign appeared first on Jono Bacon.

David Tomaschik: Hacker Summer Camp 2019: The DEF CON Data Duplication Village


One last post from Summer Camp this year (it’s been a busy month!) – this one about the “Data Duplication Village” at DEF CON. In addition to talks, the Data Duplication Village offers an opportunity to get your hands on the highest quality hacker bits – that is, copies of somewhere between 15 and 18TB of data spread across three 6TB hard drives.

I’d been curious about the DDV for a couple of years, but never participated before. I decided to change that when I saw 6TB Ironwolf NAS drives on sale a few weeks before DEF CON. I wasn’t quite sure what to expect, as the description provided by the DDV is a little bit sparse:

6TB drive 1-3: All past convention videos that DT can find - essentially a clone of infocon.org - building on last year’s collection and re-squished with brand new codecs for your size constraining pleasures.

6TB drive 2-3: freerainbowtables hash tables (lanman, mysqlsha1, NTLM) and word lists (1-2)

6TB drive 3-3: freerainbowtables GSM A5/1, md5 hash tables, and software (2-2)

Drive 1-3 seems pretty straightforward, but I spent a lot of time debating if the other two were worth getting. (And, to be honest, I think they’re cool to have, but not sure if I’ll really make good use of them.)

I want to thank the operators of the DDV for their efforts, and also my wife for dropping off and picking up my drives while I was otherwise occupied (work obligations).

It’s worth noting that, as far as I can tell, all of the contents of the drives here are available as a torrent, so you can always get the data that way. On the other hand, torrenting 15.07 TiB (16189363384 KiB to be precise) might not be your cup of tea, especially if you have a mere 75 Mbps internet connection like mine.

If you want a detailed list of the contents of each drive (along with sha256sums), I’ve posted them to Github. If you choose to participate next year, note that your drives must be 7200 RPM SATA drives (apparently several people had to be turned away due to 5400 RPM drives, which slow down the entire cloning process).
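For anyone checking a cloned drive against those sha256sums, the verification is a straightforward hash-and-compare. A generic Python sketch (the file names here are illustrative, and in practice `sha256sum -c` does the same job):

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    """Hash a file in 1 MiB chunks so large drive images don't fill RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Create a small stand-in file and verify it, as one would per drive entry.
with open("sample.bin", "wb") as f:
    f.write(b"drive contents\n")

expected = hashlib.sha256(b"drive contents\n").hexdigest()
print(sha256_of("sample.bin") == expected)  # → True
os.remove("sample.bin")
```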

Drive 1

Drive 1 really does seem to be a copy of infocon.org, it’s got dozens of conferences archived on it, adding up to a total of 132,253 files. Just to give you a taste, here’s a high-level index:

./cons
./cons/2600
./cons/44Con
./cons/ACK Security Conference
./cons/ACoD
./cons/AIDE
./cons/ANYCon
./cons/ATT&CKcon
./cons/AVTokyo
./cons/Android Security Symposium
./cons/ArchC0N
./cons/Area41
./cons/AthCon
./cons/AtlSecCon
./cons/AusCERT
./cons/BalCCon
./cons/Black Alps
./cons/Black Hat
./cons/BloomCON
./cons/Blue Hat
./cons/BodyHacking
./cons/Bornhack
./cons/BotConf
./cons/BrrCon
./cons/BruCON
./cons/CERIAS
./cons/CODE BLUE
./cons/COIS
./cons/CONFidence
./cons/COUNTERMEASURE
./cons/CYBERWARCON
./cons/CackalackyCon
./cons/CactusCon
./cons/CarolinaCon
./cons/Chaos Computer Club - Camp
./cons/Chaos Computer Club - Congress
./cons/Chaos Computer Club - CryptoCon
./cons/Chaos Computer Club - Easterhegg
./cons/Chaos Computer Club - SigInt
./cons/CharruaCon
./cons/CircleCityCon
./cons/ConVerge
./cons/CornCon
./cons/CrikeyCon
./cons/CyCon
./cons/CypherCon
./cons/DEF CON
./cons/DakotaCon
./cons/DeepSec
./cons/DefCamp
./cons/DerbyCon
./cons/DevSecCon
./cons/Disobey
./cons/DojoCon
./cons/DragonJAR
./cons/Ekoparty
./cons/Electromagnetic Field
./cons/FOSDEM
./cons/FSec
./cons/GreHack
./cons/GrrCON
./cons/HCPP
./cons/HITCON
./cons/Hack In Paris
./cons/Hack In The Box
./cons/Hack In The Random
./cons/Hack.lu
./cons/Hack3rcon
./cons/HackInBo
./cons/HackWest
./cons/Hackaday
./cons/Hacker Hotel
./cons/Hackers 2 Hackers Conference
./cons/Hackers At Large
./cons/Hackfest
./cons/Hacking At Random
./cons/Hackito Ergo Sum
./cons/Hacks In Taiwan
./cons/Hacktivity
./cons/Hash Days
./cons/HouSecCon
./cons/ICANN
./cons/IEEE Security and Privacy
./cons/IETF
./cons/IRISSCERT
./cons/Infiltrate
./cons/InfoWarCon
./cons/Insomnihack
./cons/KazHackStan
./cons/KiwiCon
./cons/LASCON
./cons/LASER
./cons/LangSec
./cons/LayerOne
./cons/LevelUp
./cons/LocoMocoSec
./cons/Louisville Metro InfoSec
./cons/MISP Summit
./cons/NANOG
./cons/NoNameCon
./cons/NolaCon
./cons/NorthSec
./cons/NotACon
./cons/NotPinkCon
./cons/Nuit Du Hack
./cons/NullCon
./cons/O'Reilly Security
./cons/OISF
./cons/OPCDE
./cons/OURSA
./cons/OWASP
./cons/Observe Hack Make
./cons/OffensiveCon
./cons/OzSecCon
./cons/PETS
./cons/PH-Neutral
./cons/Pacific Hackers
./cons/PasswordsCon
./cons/PhreakNIC
./cons/Positive Hack Days
./cons/Privacy Camp
./cons/QuahogCon
./cons/REcon
./cons/ROMHACK
./cons/RSA
./cons/RVAsec
./cons/Real World Crypto
./cons/RightsCon
./cons/RoadSec
./cons/Rooted CON
./cons/Rubicon
./cons/RuhrSec
./cons/RuxCon
./cons/S4
./cons/SANS
./cons/SEC-T
./cons/SHA2017
./cons/SIRAcon
./cons/SOURCE
./cons/SaintCon
./cons/SecTor
./cons/SecureWV
./cons/Securi-Tay
./cons/Security BSides
./cons/Security Fest
./cons/Security Onion
./cons/Security PWNing
./cons/Shakacon
./cons/ShellCon
./cons/ShmooCon
./cons/ShowMeCon
./cons/SkyDogCon
./cons/SteelCon
./cons/SummerCon
./cons/SyScan
./cons/THREAT CON
./cons/TROOPERS
./cons/TakeDownCon
./cons/Texas Cyber Summit
./cons/TheIACR
./cons/TheLongCon
./cons/TheSAS
./cons/Thotcon
./cons/Toorcon
./cons/TrustyCon
./cons/USENIX ATC
./cons/USENIX Enigma
./cons/USENIX Security
./cons/USENIX WOOT
./cons/Unrestcon
./cons/Virus Bulletin
./cons/WAHCKon
./cons/What The Hack
./cons/Wild West Hackin Fest
./cons/You Shot The Sheriff
./cons/Zero Day Con
./cons/ZeroNights
./cons/c0c0n
./cons/eth0
./cons/hardware.io
./cons/outerz0ne
./cons/r00tz Asylum
./cons/r2con
./cons/rootc0n
./cons/t2 infosec
./cons/x33fcon
./documentaries
./documentaries/Hacker Movies
./documentaries/Hacking Documentaries
./documentaries/Other
./documentaries/Pirate Documentary
./documentaries/Tech Documentary
./documentaries/Tools
./infocon.jpg
./mirrors
./mirrors/cryptome.org-July-2019.rar
./mirrors/gutenberg-15-July-2019.net.au.rar
./rainbow tables
./rainbow tables/## READ ME RAINBOW TABLES ##.txt
./rainbow tables/rainbow table software
./skills
./skills/Lock Picking
./skills/MAKE

Drive 2

Drive 2 contains the promised rainbow tables (lanman, ntlm, and mysqlsha1) as well as a bunch of wordlists. I actually wonder how a 128GB wordlist would compare to applying rules to something like rockyou – bigger is not always better, and often, you want high yield unless you’re trying to crack something obscure.

./lanman
./lanman/lm_all-space#1-7_0
./lanman/lm_all-space#1-7_1
./lanman/lm_all-space#1-7_2
./lanman/lm_all-space#1-7_3
./lanman/lm_lm-frt-cp437-850#1-7_0
./lanman/lm_lm-frt-cp437-850#1-7_1
./lanman/lm_lm-frt-cp437-850#1-7_2
./lanman/lm_lm-frt-cp437-850#1-7_3
./mysqlsha1
./mysqlsha1/mysqlsha1_loweralpha#1-10_0
./mysqlsha1/mysqlsha1_loweralpha#1-10_1
./mysqlsha1/mysqlsha1_loweralpha#1-10_2
./mysqlsha1/mysqlsha1_loweralpha#1-10_3
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_0
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_16
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_24
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_8
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_3
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_3
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_3
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_3
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_0
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_1
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_2
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_3
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_0
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_1
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_2
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_3
./mysqlsha1/mysqlsha1_numeric#1-12_0
./mysqlsha1/mysqlsha1_numeric#1-12_1
./mysqlsha1/mysqlsha1_numeric#1-12_2
./mysqlsha1/mysqlsha1_numeric#1-12_3
./mysqlsha1/rainbow table software
./ntlm
./ntlm/ntlm_alpha-space#1-9_0
./ntlm/ntlm_alpha-space#1-9_1
./ntlm/ntlm_alpha-space#1-9_2
./ntlm/ntlm_alpha-space#1-9_3
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_0
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_1
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_2
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_3
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_0
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_1
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_2
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_3
./ntlm/ntlm_loweralpha-numeric#1-10_0
./ntlm/ntlm_loweralpha-numeric#1-10_16
./ntlm/ntlm_loweralpha-numeric#1-10_24
./ntlm/ntlm_loweralpha-numeric#1-10_8
./ntlm/ntlm_loweralpha-numeric-space#1-8_0
./ntlm/ntlm_loweralpha-numeric-space#1-8_1
./ntlm/ntlm_loweralpha-numeric-space#1-8_2
./ntlm/ntlm_loweralpha-numeric-space#1-8_3
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_0
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_1
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_2
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_3
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_0
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_1
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_2
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_3
./ntlm/ntlm_loweralpha-space#1-9_0
./ntlm/ntlm_loweralpha-space#1-9_1
./ntlm/ntlm_loweralpha-space#1-9_2
./ntlm/ntlm_loweralpha-space#1-9_3
./ntlm/ntlm_mixalpha-numeric#1-8_0
./ntlm/ntlm_mixalpha-numeric#1-8_1
./ntlm/ntlm_mixalpha-numeric#1-8_2
./ntlm/ntlm_mixalpha-numeric#1-8_3
./ntlm/ntlm_mixalpha-numeric#1-9_0
./ntlm/ntlm_mixalpha-numeric#1-9_16
./ntlm/ntlm_mixalpha-numeric#1-9_32
./ntlm/ntlm_mixalpha-numeric#1-9_48
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_0
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_1
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_2
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_3
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_0
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_16
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_24
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_32
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_8
./ntlm/ntlm_mixalpha-numeric-space#1-7_0
./ntlm/ntlm_mixalpha-numeric-space#1-7_1
./ntlm/ntlm_mixalpha-numeric-space#1-7_2
./ntlm/ntlm_mixalpha-numeric-space#1-7_3
./ntlm/rainbow table software
./rainbow table software
./rainbow table software/Free Rainbow Tables » Distributed Rainbow Table Generation » LM, NTLM, MD5, SHA1, HALFLMCHALL, MSCACHE.mht
./rainbow table software/converti2_0.3_src.7z
./rainbow table software/converti2_0.3_win32_mingw.7z
./rainbow table software/converti2_0.3_win32_vc.7z
./rainbow table software/converti2_0.3_win64_mingw.7z
./rainbow table software/converti2_0.3_win64_vc.7z
./rainbow table software/rcracki_mt_0.7.0_linux_x86_64.7z
./rainbow table software/rcracki_mt_0.7.0_src.7z
./rainbow table software/rcracki_mt_0.7.0_win32_mingw.7z
./rainbow table software/rcracki_mt_0.7.0_win32_vc.7z
./rainbow table software/rti2formatspec.pdf
./rainbow table software/rti2rto_0.3_beta2_win32_vc.7z
./rainbow table software/rti2rto_0.3_beta2_win64_vc.7z
./rainbow table software/rti2rto_0.3_src.7z
./rainbow table software/rti2rto_0.3_win32_mingw.7z
./rainbow table software/rti2rto_0.3_win64_mingw.7z
./word lists
./word lists/SecLists-master.rar
./word lists/WPA-PSK WORDLIST 3 Final (13 GB).rar
./word lists/Word Lists archive - infocon.org.torrent
./word lists/crackstation-human-only.txt.rar
./word lists/crackstation.realuniq.rar
./word lists/fbnames.rar
./word lists/human0id word lists.rar
./word lists/openlibrary_wordlist.rar
./word lists/pwgen.rar
./word lists/pwned-passwords-2.0.txt.rar
./word lists/pwned-passwords-ordered-2.0.rar
./word lists/xsukax 128GB word list all 2017 Oct.7z

Drive 3

Drive 3 contains more rainbow tables, this time for A5-1 (GSM encryption), and extensive tables for MD5. It appears to contain the same software and wordlists as Drive 2.

./A51
./A51 rainbow tables - infocon.org.torrent
./A51/Decoding-Gsm.pdf
./A51/a51_table_100.dlt
./A51/a51_table_108.dlt
./A51/a51_table_116.dlt
./A51/a51_table_124.dlt
./A51/a51_table_132.dlt
./A51/a51_table_140.dlt
./A51/a51_table_148.dlt
./A51/a51_table_156.dlt
./A51/a51_table_164.dlt
./A51/a51_table_172.dlt
./A51/a51_table_180.dlt
./A51/a51_table_188.dlt
./A51/a51_table_196.dlt
./A51/a51_table_204.dlt
./A51/a51_table_212.dlt
./A51/a51_table_220.dlt
./A51/a51_table_230.dlt
./A51/a51_table_238.dlt
./A51/a51_table_250.dlt
./A51/a51_table_260.dlt
./A51/a51_table_268.dlt
./A51/a51_table_276.dlt
./A51/a51_table_292.dlt
./A51/a51_table_324.dlt
./A51/a51_table_332.dlt
./A51/a51_table_340.dlt
./A51/a51_table_348.dlt
./A51/a51_table_356.dlt
./A51/a51_table_364.dlt
./A51/a51_table_372.dlt
./A51/a51_table_380.dlt
./A51/a51_table_388.dlt
./A51/a51_table_396.dlt
./A51/a51_table_404.dlt
./A51/a51_table_412.dlt
./A51/a51_table_420.dlt
./A51/a51_table_428.dlt
./A51/a51_table_436.dlt
./A51/a51_table_492.dlt
./A51/a51_table_500.dlt
./A51/rainbow table software
./LANMAN rainbow tables - infocon.org.torrent
./MD5 rainbow tables - infocon.org.torrent
./MySQL SHA-1 rainbow tables - infocon.org.torrent
./NTLM rainbow tables - infocon.org.torrent
./md5
./md5/md5_alpha-space#1-9_0
./md5/md5_alpha-space#1-9_1
./md5/md5_alpha-space#1-9_2
./md5/md5_alpha-space#1-9_3
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_0
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_1
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_2
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_3
./md5/md5_loweralpha#1-10_0
./md5/md5_loweralpha#1-10_1
./md5/md5_loweralpha#1-10_2
./md5/md5_loweralpha#1-10_3
./md5/md5_loweralpha-numeric#1-10_0
./md5/md5_loweralpha-numeric#1-10_16
./md5/md5_loweralpha-numeric#1-10_24
./md5/md5_loweralpha-numeric#1-10_8
./md5/md5_loweralpha-numeric-space#1-8_0
./md5/md5_loweralpha-numeric-space#1-8_1
./md5/md5_loweralpha-numeric-space#1-8_2
./md5/md5_loweralpha-numeric-space#1-8_3
./md5/md5_loweralpha-numeric-space#1-9_0
./md5/md5_loweralpha-numeric-space#1-9_1
./md5/md5_loweralpha-numeric-space#1-9_2
./md5/md5_loweralpha-numeric-space#1-9_3
./md5/md5_loweralpha-numeric-symbol32-space#1-7_0
./md5/md5_loweralpha-numeric-symbol32-space#1-7_1
./md5/md5_loweralpha-numeric-symbol32-space#1-7_2
./md5/md5_loweralpha-numeric-symbol32-space#1-7_3
./md5/md5_loweralpha-numeric-symbol32-space#1-8_0
./md5/md5_loweralpha-numeric-symbol32-space#1-8_1
./md5/md5_loweralpha-numeric-symbol32-space#1-8_2
./md5/md5_loweralpha-numeric-symbol32-space#1-8_3
./md5/md5_loweralpha-space#1-9_0
./md5/md5_loweralpha-space#1-9_1
./md5/md5_loweralpha-space#1-9_2
./md5/md5_loweralpha-space#1-9_3
./md5/md5_mixalpha-numeric#1-9_0
./md5/md5_mixalpha-numeric#1-9_0-complete
./md5/md5_mixalpha-numeric#1-9_16
./md5/md5_mixalpha-numeric#1-9_32
./md5/md5_mixalpha-numeric#1-9_48
./md5/md5_mixalpha-numeric-all-space#1-7_0
./md5/md5_mixalpha-numeric-all-space#1-7_1
./md5/md5_mixalpha-numeric-all-space#1-7_2
./md5/md5_mixalpha-numeric-all-space#1-7_3
./md5/md5_mixalpha-numeric-all-space#1-8_0
./md5/md5_mixalpha-numeric-all-space#1-8_16
./md5/md5_mixalpha-numeric-all-space#1-8_24
./md5/md5_mixalpha-numeric-all-space#1-8_32
./md5/md5_mixalpha-numeric-all-space#1-8_8
./md5/md5_mixalpha-numeric-space#1-7_0
./md5/md5_mixalpha-numeric-space#1-7_1
./md5/md5_mixalpha-numeric-space#1-7_2
./md5/md5_mixalpha-numeric-space#1-7_3
./md5/md5_mixalpha-numeric-space#1-8_0
./md5/md5_mixalpha-numeric-space#1-8_1
./md5/md5_mixalpha-numeric-space#1-8_2
./md5/md5_mixalpha-numeric-space#1-8_3
./md5/md5_numeric#1-14_0
./md5/md5_numeric#1-14_1
./md5/md5_numeric#1-14_2
./md5/md5_numeric#1-14_3
./rainbow table software
./rainbow table software/Free Rainbow Tables » Distributed Rainbow Table Generation » LM, NTLM, MD5, SHA1, HALFLMCHALL, MSCACHE.mht
./rainbow table software/converti2_0.3_src.7z
./rainbow table software/converti2_0.3_win32_mingw.7z
./rainbow table software/converti2_0.3_win32_vc.7z
./rainbow table software/converti2_0.3_win64_mingw.7z
./rainbow table software/converti2_0.3_win64_vc.7z
./rainbow table software/rcracki_mt_0.7.0_linux_x86_64.7z
./rainbow table software/rcracki_mt_0.7.0_src.7z
./rainbow table software/rcracki_mt_0.7.0_win32_mingw.7z
./rainbow table software/rcracki_mt_0.7.0_win32_vc.7z
./rainbow table software/rti2formatspec.pdf
./rainbow table software/rti2rto_0.3_beta2_win32_vc.7z
./rainbow table software/rti2rto_0.3_beta2_win64_vc.7z
./rainbow table software/rti2rto_0.3_src.7z
./rainbow table software/rti2rto_0.3_win32_mingw.7z
./rainbow table software/rti2rto_0.3_win64_mingw.7z
./word lists
./word lists/SecLists-master.rar
./word lists/WPA-PSK WORDLIST 3 Final (13 GB).rar
./word lists/Word Lists archive - infocon.org.torrent
./word lists/crackstation-human-only.txt.rar
./word lists/crackstation.realuniq.rar
./word lists/fbnames.rar
./word lists/human0id word lists.rar
./word lists/openlibrary_wordlist.rar
./word lists/pwgen.rar
./word lists/pwned-passwords-2.0.txt.rar
./word lists/pwned-passwords-ordered-2.0.rar
./word lists/xsukax 128GB word list all 2017 Oct.7z

Colin King: Boot speed improvements for Ubuntu 19.10 Eoan Ermine

The early boot requires loading and decompressing the kernel and initramfs from the boot storage device. This speed depends on several factors: the speed of loading an image from the boot device, the CPU and memory/cache speed for decompression, and the compression type.

Generally speaking, the smallest (best) compression takes longer to decompress due to the extra complexity in the compression algorithm.  Thus we have a trade-off between load time vs decompression time.

For slow rotational media (such as a 5400 RPM HDD) with a slow CPU, the loading time can be the dominant factor.  For faster devices (such as an SSD) with a slow CPU, decompression time may be the dominant factor.  For fast 7200-10000 RPM HDDs paired with fast CPUs, the time to seek to the data starts to dominate the load time, so different compressed kernel sizes differ only slightly in load time.

The Ubuntu kernel team ran several experiments benchmarking several x86 configurations, using the x86 TSC (Time Stamp Counter) to measure kernel load and decompression time for 6 different compression types: BZIP2, GZIP, LZ4, LZMA, LZO and XZ.  BZIP2, LZMA and XZ are slow to decompress, so they were ruled out very quickly from further tests.

In terms of compressed size, GZIP produces the smallest compressed kernel, followed by LZO (~16% larger) and LZ4 (~25% larger).  In decompression time, LZ4 is over 7 times faster than GZIP, and LZO is ~1.25 times faster than GZIP on x86.

In absolute wall-clock times, the following kernel load and decompress results were observed:

Lenovo x220 laptop, 5400 RPM HDD:
  LZ4 best, 0.24s faster than the GZIP total time of 1.57s

Lenovo x220 laptop, SSD:
  LZ4 best, 0.29s faster than the GZIP total time of 0.87s

Xeon 8 thread desktop with 7200 RPM HDD:
  LZ4 best, 0.05s faster than the GZIP total time of 0.32s

VM on a Xeon 8 thread desktop host with SSD RAID ZFS backing store:
  LZ4 best, 0.05s faster than the GZIP total time of 0.24s

Even with slow spinning media and a slow CPU, the longer load time of the LZ4 kernel is overcome by the far faster decompression time. As media gets faster, the load time difference between GZIP, LZ4 and LZO diminishes and the decompression time becomes the dominant speed factor with LZ4 the clear winner.
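The size/speed trade-off is easy to observe with the compressors that ship in the Python standard library. A rough sketch only: LZ4 and LZO are not in the stdlib, and a small repetitive payload is a poor stand-in for a real kernel image, so treat any numbers as illustrative:

```python
import bz2
import gzip
import lzma
import time

# Hypothetical payload standing in for a kernel image; real kernels are
# tens of MB of mixed code and data, so absolute numbers will differ.
payload = b"kernel code and data " * 50000

for name, compress, decompress in [
    ("gzip", gzip.compress, gzip.decompress),
    ("bzip2", bz2.compress, bz2.decompress),
    ("xz", lzma.compress, lzma.decompress),
]:
    blob = compress(payload)
    start = time.perf_counter()
    restored = decompress(blob)
    elapsed = time.perf_counter() - start
    assert restored == payload
    print(f"{name:6s} {len(blob):8d} bytes, {elapsed * 1000:7.2f} ms to decompress")
```

The same pattern the kernel team measured shows up here: better ratios generally cost more decompression time, which is exactly the trade-off that makes LZ4 attractive despite its larger images.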

For Ubuntu 19.10 Eoan Ermine, LZ4 will be the default compression method for x86, ppc64el and s390 kernels and for the initramfs too.

References:
Analysis: https://kernel.ubuntu.com/~cking/boot-speed-eoan-5.3/kernel-compression-method.txt
Data: https://kernel.ubuntu.com/~cking/boot-speed-eoan-5.3/boot-speed-compression-5.3-rc4.ods

Ubuntu Blog: Machine Learning Operations (MLOps): Deploy at Scale

What do successful deployments have in common?

Artificial Intelligence and Machine Learning adoption in the enterprise is exploding from Silicon Valley to Wall Street with diverse use cases ranging from the analysis of customer behaviour and purchase cycles to diagnosing medical conditions.

Following on from our webinar ‘Getting started with AI’, this webinar will dive into what success looks like when deploying machine learning models, including training, at scale. The key topics are:

  • Automatic Workflow Orchestration
  • ML Pipeline development
  • Kubernetes / Kubeflow Integration
  • On-device Machine Learning, Edge Inference and Model Federation
  • On-prem to cloud, on-demand extensibility
  • Scale-out model serving and inference

This webinar will detail recent advancements in these areas alongside providing actionable insights for viewers to apply to their AI/ML efforts!

Watch the webinar

The Fridge: Ubuntu Weekly Newsletter Issue 595


Welcome to the Ubuntu Weekly Newsletter, Issue 595 for the week of September 1 – 7, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Benjamin Mako Hill: How Discord moderators build innovative solutions to problems of scale with the past as a guide


Introducing new technology into a work place is often disruptive, but what if your work was also completely mediated by technology? This is exactly the case for the teams of volunteer moderators who work to regulate content and protect online communities from harm. What happens when the social media platforms these communities rely on change completely? How do moderation teams overcome the challenges caused by new technological environments? How do they do so while managing a “brand new” community with tens of thousands of users?

For a new study that will be published in CSCW in November, we interviewed 14 moderators of 8 “subreddit” communities from the social media aggregation and discussion platform Reddit to answer these questions. We chose these communities because each community had recently adopted the real-time chat platform Discord to support real-time chat in their community. This expansion into Discord introduced a range of challenges—especially for the moderation teams of large communities.

We found that moderation teams of large communities improvised their own creative solutions to challenges they faced by building bots on top of Discord’s API. This was not too shocking given that APIs and bots are frequently cited as tools that allow innovation and experimentation when scaling up digital work. What did surprise us, however, was how important moderators’ past experiences were in guiding the way they used bots. In the largest communities that faced the biggest challenges, moderators relied on bots to reproduce the tools they had used on Reddit. The moderators would often go so far as to give their bots the names of moderator tools available on Reddit. Our findings suggest that support for user-driven innovation is important not only in that it allows users to explore new technological possibilities but also in that it allows users to mine their past experiences to introduce old systems into new environments.

What Challenges Emerged in Discord?

Discord’s text channels allow for more natural, in the moment conversations compared to Reddit. In Discord, this social aspect also made moderation work much more difficult. One moderator explained:

“It’s kind of rough because if you miss it, it’s really hard to go back to something that happened eight hours ago and the conversation moved on and be like ‘hey, don’t do that.’ ”

Moderators we spoke to found that the work of managing their communities was made even more difficult by their community’s size:

“On the day to day of running 65,000 people, it’s literally like running a small city…We have people that are actively online and chatting that are larger than a city…So it’s like, that’s a lot to actually keep track of and run and manage.”

The moderators of large communities repeatedly told us that the tools provided to moderators on Discord were insufficient. For example, they pointed out that tools like Discord’s Audit Log were inadequate for keeping track of the tens of thousands of members of their communities. Discord also lacks automated moderation tools like Reddit’s Automoderator and Modmail, leaving moderators on Discord with few tools to scale their work and manage communications with community members.

How Did Moderation Teams Overcome These Challenges?

The moderation teams we talked with adapted to these challenges through innovative uses of Discord’s API toolkit. Like many social media platforms, Discord offers a public API where users can develop apps that interact with the platform through a Discord “bot.” We found that these bots play a critical role in helping moderation teams manage Discord communities with large populations.

Guided by their experience with using tools like Automoderator on Reddit, moderators working on Discord built bots with similar functionality to solve the problems associated with scaled content and Discord’s fast-paced chat affordances. These bots would search for regular expressions and URLs that go against the community’s rules:

“It makes it so that rather than having to watch every single channel all of the time for this sort of thing or rely on users to tell us when someone is basically running amuck, posting derogatory terms and terrible things that Discord wouldn’t catch itself…so it makes it that we don’t have to watch every channel.”
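The automated filtering the moderators describe can be sketched as a small rule matcher. This is a hypothetical stand-in for the logic such a bot might run on each incoming message; the rule names and patterns are purely illustrative, not Discord's API:

```python
import re

# Hypothetical rule set a moderation bot might check each message against;
# the patterns and names are illustrative, not taken from any real community.
RULES = {
    "slur": re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE),
    "invite-link": re.compile(r"discord\.gg/\w+"),
    "url": re.compile(r"https?://\S+"),
}

def violations(message: str) -> list:
    """Return the names of all rules the message breaks."""
    return [name for name, pattern in RULES.items() if pattern.search(message)]

print(violations("check out https://example.com and discord.gg/abc123"))
# → ['invite-link', 'url']
```

In a real bot this check would run on every message event, so no single moderator has to watch every channel all of the time.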

Bots were also used to replace Discord’s Audit Log feature with what moderators referred to often as “Mod logs”—another term borrowed from Reddit. Moderators will send commands to a bot like “!warn username” to store information such as when a member of their community has been warned for breaking a rule and automatically store this information in a private text channel in Discord. This information helps organize information about community members, and it can be instantly recalled with another command to the bot to help inform future moderation actions against other community members.
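A minimal in-memory version of such a mod log might look like the following. The "!warn" command syntax follows the description above, but the data structure and fields are illustrative:

```python
from datetime import datetime, timezone

# Minimal stand-in for a "mod log": each "!warn user reason" command is
# parsed and recorded, and a user's history can be recalled later.
mod_log = []

def handle_command(command: str) -> None:
    """Parse a moderator command like '!warn alice spamming' and log it."""
    parts = command.split(maxsplit=2)
    if parts and parts[0] == "!warn" and len(parts) >= 2:
        mod_log.append({
            "user": parts[1],
            "reason": parts[2] if len(parts) > 2 else "",
            "when": datetime.now(timezone.utc).isoformat(),
        })

def history(user: str) -> list:
    """Return all logged actions against a given user."""
    return [entry for entry in mod_log if entry["user"] == user]

handle_command("!warn alice spamming invite links")
handle_command("!warn bob derogatory language")
print([e["reason"] for e in history("alice")])  # → ['spamming invite links']
```

In the communities we studied, the bot wrote these records into a private text channel rather than local memory, which gave the whole team instant shared recall of past actions.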

Finally, moderators also used Discord’s API to develop bots that functioned virtually identically to Reddit’s Modmail tool. Moderators are limited in their availability to answer questions from members of their community, but tools like the “Modmail” help moderation teams manage this problem by mediating communication with community members through a bot:

“So instead of having somebody DM a moderator specifically and then having to talk…indirectly with the team, a [text] channel is made for that specific question and everybody can see that and comment on that. And then whoever’s online responds to the community member through the bot, but everybody else is able to see what is being responded.”

The tools created with Discord’s API — customizable automated content moderation, Mod logs, and a Modmail system — all resembled moderation tools on Reddit. They even bear their names! Over and over, we found that moderation teams essentially created and used bots to transform aspects of Discord, like text channels, into Mod logs and Modmail systems that resembled the same tools they were using to moderate their communities on Reddit.

What Does This Mean for Online Communities?

We think that the experience of the moderators we interviewed points to a potentially important, overlooked source of value for groups navigating technological change: the potent combination of users’ past experience with their ability to redesign and reconfigure their technological environments. Our work suggests that the value of innovation platforms like APIs and bots is not only that they allow the discovery of “new” things. Their value also flows from the fact that they allow the re-creation of the things that communities already know can solve their problems and that they already know how to use.


Both this blog post and the paper it describes are collaborative work by Charles Kiene, Jialun “Aaron” Jiang, and Benjamin Mako Hill. For more details, check out the full 23 page paper. The work will be presented in Austin, Texas at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW’19) in November 2019. The work was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). If you have questions or comments about this study, contact Charles Kiene at ckiene [at] uw [dot] edu.


Full Circle Magazine: Full Circle Weekly News #144


Default Ubuntu Yaru Theme Rebased on Adwaita 3.32

https://www.linuxuprising.com/2019/08/default-ubuntu-yaru-theme-rebased-on.html

Announcing the EPEL 8.0 Official Release

http://smoogespace.blogspot.com/2019/08/announcing-epel-80-official-release.html

Mozilla Revamps Firefox’s HTTPS Address Bar Information

https://www.ghacks.net/2019/08/13/mozilla-revamps-firefoxs-https-address-bar-information/

XFCE 4.14 Desktop Officially Released

https://www.omgubuntu.co.uk/2019/08/xfce-4-14

Credits:

Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic

https://creativecommons.org/licenses/by/4.0/

Jono Bacon: Jeff Atwood on Discourse, Stack Overflow, and Building Online Community Platforms


Building collaborative online platforms is hard. To make a platform that is truly compelling, and rewards the right kind of behavior and teamwork, requires a careful balance of effective design, workflow, and understanding the psychology of how people work together.

Jeff Atwood has an enormous amount of experience doing precisely this. Not only was he the co-founder of Stack Overflow (and later Stack Exchange), but he is also the founder of Discourse, an enormously popular Open Source platform for online discussions.

In this episode of Conversations With Bacon we get into the evolution of online communities, how they have grown, and Jeff’s approach to the design and structure of the systems he has worked on. We delve into Slack vs. forums (and where they are most appropriately used), how Discourse has designed a platform where capabilities are earned, different cultural approaches to communication, and much more.

There is so much insight in this discussion from Jeff, and it is well worth a listen.

Oh, and by the way, Jeff endorsed my new book ‘People Powered: How communities can supercharge your business, brand, and teams’. Be sure to check it out!

The post Jeff Atwood on Discourse, Stack Overflow, and Building Online Community Platforms appeared first on Jono Bacon.

Ubuntu Blog: Hardware discovery and kernel auto-configuration in MAAS


In this blog, we are going to explore how to leverage MAAS for hardware discovery and kernel auto-configuration using tags.

In many cases, certain pieces of hardware require extra kernel parameters in order to make use of them. For example, when configuring GPU passthrough we will typically need to configure the GPU card with specific kernel parameters. To achieve this, we will rely on MAAS’ hardware discovery, XPath expressions and machine tags.

Tags, XPath expressions and kernel parameters

Machine tags are a mechanism used in MAAS to easily identify machines. While tags can be manually assigned to machines, they can also be assigned automatically when a machine matches a specific pattern – an XPath expression – which describes the location of an element or an attribute in an XML document.

When commissioning a machine, MAAS gathers the lshw output (in XML), which lists all of the information about the attached hardware. When creating a tag, MAAS allows you to provide an XPath definition. This definition is then matched against the gathered lshw information; if it matches, the tag is applied to the commissioned machine.
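To make this matching concrete, here is a minimal Python sketch of the same idea: it checks a trimmed-down, invented lshw-style document for the hardware traits a tag definition might target. Python’s ElementTree supports only a subset of XPath, so the contains() predicates MAAS accepts are approximated with plain string checks; the XML sample and the function are hypothetical illustrations, not part of MAAS.

```python
import xml.etree.ElementTree as ET

# A trimmed-down, invented lshw-style inventory, similar in shape to
# what MAAS gathers during commissioning (hypothetical sample).
LSHW_XML = """\
<list>
  <node id="cpu:0" class="processor">
    <capabilities>
      <capability id="vmx">Intel VT-x</capability>
    </capabilities>
  </node>
  <node id="display" class="display">
    <vendor>NVIDIA Corporation</vendor>
    <description>3D controller</description>
    <product>Tesla V100 PCIe 16GB</product>
  </node>
</list>
"""


def matches_gpu_definition(xml_text: str) -> bool:
    """Return True when the inventory shows a VT-x capable CPU and an
    NVIDIA Tesla V100 display adapter, mirroring the tag definition."""
    root = ET.fromstring(xml_text)
    # ElementTree supports [@attr='value'] predicates, but not XPath's
    # contains(), so the substring checks are done in Python instead.
    has_vmx = root.find(
        ".//node[@id='cpu:0']/capabilities/capability[@id='vmx']"
    ) is not None
    display = root.find(".//node[@id='display']")
    has_gpu = (
        display is not None
        and "NVIDIA" in (display.findtext("vendor") or "")
        and "3D" in (display.findtext("description") or "")
        and "Tesla V100 PCIe 16GB" in (display.findtext("product") or "")
    )
    return has_vmx and has_gpu


print(matches_gpu_definition(LSHW_XML))  # True for this sample
```

In MAAS itself none of this code is needed – the XPath definition attached to the tag performs the equivalent matching automatically against every commissioned machine.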

Similarly, when creating a tag one can specify which kernel parameters to apply to any machine that carries the tag. Combining the definition and the kernel options in a single tag allows MAAS to automatically discover all machines that match the XPath expression and to apply the kernel parameters once a matching machine is deployed. The following shows the base command to use.

$ maas <username> tags create \
    definition='<XPath expression>' \
    kernel_opts='<Kernel parameters>'

A practical example

As a practical example, we want to configure GPU passthrough. For this, we want to create a tag that automatically matches all machines that have Intel VT-d enabled and a Tesla V100 PCIe 16GB GPU. We do so by using a definition similar to:

definition='//node[@id="cpu:0"]/capabilities/capability/@id = "vmx" and //node[@id="display"]/vendor[contains(.,"NVIDIA")] and //node[@id="display"]/description[contains(.,"3D")] and //node[@id="display"]/product[contains(.,"Tesla V100 PCIe 16GB")]'

Since we want this configuration to take effect at deployment time, we also set the kernel parameters to apply to the deployed machine:

kernel_opts="nomodeset modprobe.blacklist=nouveau,nvidiafb,snd_hda_intel nouveau.blacklist=1 video=vesafb:off,efifb:off intel_iommu=on rd.driver.pre=pci-stub rd.driver.pre=vfio-pci pci-stub.ids=10de:1db4 vfio-pci.ids=10de:1db4 vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.disable_vga=1"

These kernel parameters will:

  • Blacklist drivers and disable displays
  • Enable IOMMU 
  • Pre-load kernel modules
  • And reserve PCI ID (10de:1db4) for GPU Passthrough

As such, creating a tag that will auto-apply to all machines that match the hardware definition and apply kernel parameters at deployment time will look like this:

$ maas <username> tags create name=gpgpu-tesla-vi \
     comment="Enable passthrough for Nvidia Tesla V series GPUs 
              on Intel" \
     definition='
         //node[@id="cpu:0"]/capabilities/capability/@id = "vmx" 
        and //node[@id="display"]/vendor[contains(.,"NVIDIA")] 
        and //node[@id="display"]/description[contains(.,"3D")] 
        and //node[@id="display"]/product[contains(.,"Tesla V100 
        PCIe 16GB")]' \
     kernel_opts="console=tty0 console=ttyS0,115200n8r nomodeset 
          modprobe.blacklist=nouveau,nvidiafb,snd_hda_intel 
          nouveau.blacklist=1 video=vesafb:off,efifb:off 
          intel_iommu=on rd.driver.pre=pci-stub 
          rd.driver.pre=vfio-pci pci-stub.ids=10de:1db4
          vfio-pci.ids=10de:1db4 
          vfio_iommu_type1.allow_unsafe_interrupts=1
          vfio-pci.disable_vga=1"

Once this tag is created, MAAS will automatically apply it to every newly commissioned machine that matches the definition, allowing administrators to configure homogeneous hardware at scale by defining just a small set of tags.
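As a sanity check after deployment, you can verify on the machine itself that the expected parameters made it onto the kernel command line. The following is a minimal shell sketch; it uses a hard-coded sample string for illustration, whereas on a real deployed node you would read the live value from /proc/cmdline as noted in the comment.

```shell
# Check a kernel command line for the parameters the tag should apply.
# On a real deployed machine, read the live value instead:
#   CMDLINE="$(cat /proc/cmdline)"
CMDLINE="nomodeset intel_iommu=on pci-stub.ids=10de:1db4 vfio-pci.ids=10de:1db4"

for param in intel_iommu=on vfio-pci.ids=10de:1db4; do
  case " $CMDLINE " in
    *" $param "*) echo "present: $param" ;;
    *)            echo "MISSING: $param" ;;
  esac
done
```

If any parameter reports MISSING, the machine most likely was deployed before the tag existed, or it did not match the tag’s definition at commissioning time.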

For more information, please contact us or visit https://maas.io/docs/tags.

Kubuntu General News: Kubuntu Meets at Milan Akademy 2019


A few Kubuntu Members (and Councillors!) met Thursday before KDE Akademy’s end. We discussed the coming release (which will be 19.10) and the upcoming LTS (20.04) – which will be Plasma LTS *and* Qt LTS. This combination will make this LTS super-supported and stable.

We also discussed snaps and whether Ubuntu will eventually move to “all snaps all the time”, at least for applications. This may be in our future, so it is worth thinking about and discussing.

Tobias Fischbach came by the BOF and told us about LiMux, which is based on Kubuntu and has been the official desktop distribution of Munich for the past few years. Now, however, unless the Mayor changes (or changes his mind), the city is moving back to Windows, which will be unfortunate for the city.

Slightly off-topic but relevant: KDE neon will be moving to a 20.04 base soon after its release, but it will not stay on the Plasma LTS or Qt LTS. So users who want the very latest in KDE Plasma and applications will continue to have the option of using neon, while our users, who expect more testing and stability, can choose between the LTS for the ultimate in stability and our interim releases for newer Plasma and applications.

Of course we continue to ask those of our users who want to help the Kubuntu project to volunteer, especially to test. We’ll soon need testers for the upcoming Eoan, which will become 19.10. Drop into the development IRC channel, #kubuntu-devel on freenode, or subscribe to the Kubuntu Development list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

Ubuntu Podcast from the UK LoCo: S12E23 – Wing Commander


This week we’ve been playing Pillars of Eternity. We discuss boot speed improvements for Ubuntu 19.10, using LXD to map ports, NVIDIA Prime Renderer switching, changes in the Yaru theme and the Librem 5 shipping (perhaps). We also round up some events and some news from the tech world.

It’s Season 12 Episode 23 of the Ubuntu Podcast! Alan Pope and Mark Johnson are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.
