pre-release: linux.conf.au meeting announcement

Please take a moment to review your details and reply with OK or edits.
The subject line and everything below it is what will go out, and will also be used to title the videos.

Subject: 
ANN: linux.conf.au at Room 6 Tue January 14, 10:45am


linux.conf.au
=========================
When: 10:45 AM Tuesday January 14, 2020
Where: Room 6

https://lca2020.linux.org.au/schedule

Topics
------
1. Facebook, Dynamite, Uber, Bombs, and You
Lana Brindley
tags: Security, Identity, Privacy
Consider these two cases: Volkswagen was caught out having written software code that allowed their cars to cheat emissions tests. Uber also developed software (called 'greyball') which allowed them to cheat law enforcement officials trying to crack down on ride-sharing. The difference between them is that Volkswagen software engineers went to jail, and Uber software engineers didn't. Why? Because one is a car company, and one is a software company.

Most industries have had what we might call an "oh no" moment. It's those moments that encourage industries to become better regulated, in order to prevent further disasters. The IT industry has had many moments that could be considered consequential enough to encourage better regulation, but the changes have never been made. Because the industry has avoided effective regulation for so long, it is possible that we are hurtling towards a disaster of epic proportions, one that we haven't even managed to conceive of yet.

In this talk, I will go through some historical examples of disasters leading to regulation in other industries, and the measures that were put into place to mitigate the problem. I will also address some of the major moments from the IT industry that should have prompted regulation, but haven't. Finally, I will discuss ways that IT professionals can blow the whistle on potential disasters before they happen ... without losing their jobs!
 recording release: yes license: CC BY  

2. Evolution of Linux Containers to Container Native Storage...
Sameer Kandarkar
tags: Containers
The focus of this session will be on the journey of containerization technology from 1974 (the first container) to container native storage.

This Session will cover 
1. The container technology stack (chroot, namespaces, cgroups, LSM)
2. Open-source container orchestration tools such as Kubernetes, Docker Swarm and OpenShift.
3. Software defined storage with containers (CNS: Container Native Storage), Storage for and in Containers. [Ceph and Gluster]
4. Introduction to Heketi and Rook [Storage Orchestrators]
5. Container Storage Interface architecture.
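As a small taste of the namespace layer in that stack: every Linux process exposes the namespaces it belongs to under /proc. A minimal Python sketch (illustrative only, not from the session materials):

```python
import os

def list_namespaces(pid="self"):
    """Return the namespace types of a process as shown under /proc.

    On Linux, /proc/<pid>/ns contains one entry per namespace the
    process belongs to (e.g. mnt, net, pid, uts). Returns an empty
    list on systems without /proc, so callers can degrade gracefully.
    """
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):
        return []
    return sorted(os.listdir(ns_dir))

print(list_namespaces())
```

Container runtimes create fresh entries in exactly these namespace types to isolate a container from the host.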

As a takeaway for the audience, I will show a demo of how you can create persistent storage with the help of SDS and an orchestration tool. The audience will get an idea of why containers need persistent storage and how we can achieve it. I have created some animations for my presentation that will help keep it simple and make it interesting for the audience.
 recording release: yes license: CC BY  

3. The Internet: Protecting Our Democratic Lifeline
Brett Sheffield
tags: Security, Identity, Privacy
The Internet is arguably the most useful tool for enabling democracy that has ever been developed. It allows citizens to communicate, to organize and to disseminate information. It enables whistleblowers and journalists to expose corruption and malpractice. It enables people to communicate across borders, to share and discover each other's cultures and beliefs, to promote understanding and encourage peace.

Unfortunately the Internet is today threatened from all sides by criminals, governments and corporations alike. Unless we take steps to prevent it, the weakening of this democratic tool will continue. Instead, we can choose to make it stronger and leave the next generation with a truly global, rights-enabled communication network.

This talk will explore the threats to our democratic lifeline from increased centralisation, tracking, censorship, and harmful legislation, and what we need to do about it.
 recording release: yes license: CC BY  

4. Dynamic Workloads need Dynamic Storage - using rook-ceph with k8s
Steven Ellis
tags: Containers
Container technology is a great enabler for developer agility, and these environments require an agile and dynamic storage platform. Gone are the days where you had to wait on your storage administrator to provision a new LUN or NFS mount for your project. Thanks to the Rook Project, and its ceph storage provider, you can now have a rapid, dynamic and elastic storage footprint for all your development needs.

This session will cover - rook basics, dynamic vs static persistent storage needs of workloads, and a demo of using rook-ceph.
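For readers unfamiliar with dynamic provisioning: from the application side it is just a PersistentVolumeClaim against a StorageClass backed by rook-ceph. A sketch (the StorageClass name `rook-ceph-block` is an assumption; the actual name depends on how the cluster was deployed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block   # assumed name; check your cluster
```

Rook's provisioner sees the claim and carves a matching ceph volume out on demand, with no storage administrator in the loop.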
 recording release: yes license: CC BY  

5. Privacy and Transparency in the VPN Industry
Ruben Rubio Rey
tags: Security, Identity, Privacy
VPNs (Virtual Private Networks) are a prominent technology solution that helps consumers and businesses protect their data and secure their privacy. The VPN market has grown exponentially over the last 10 years.

Ironically, customers' privacy does not always come first for companies in the VPN business. Effective privacy requires real transparency.

That's why we are working on a complete VPN solution called TheVPNCompany, which will be open source. This solution includes all elements required to run a VPN business: website, automation for all the systems via configuration management systems, and real-time monitoring.

Due to its open source nature, it is as transparent as you can get and anybody can audit how it works. 

This talk also discusses the major issues that exist in the VPN market. Although most VPN services claim that they do not collect user data, we will discuss what user data needs to be collected to be able to operate a VPN service.
 recording release: yes license: CC BY  

6. OCIv2: Container Images Considered Harmful
Aleksa Sarai
tags: Containers
Most modern container image formats use tar-based linear archives to represent root filesystems, which results in many issues when using modern container images. In this talk, we will demonstrate a solution to this problem that we plan to propose for standardisation within the Open Container Initiative (code-named "OCIv2 images").

This talk is specific to the Open Container Initiative's image specification, but the same techniques could be applied to other systems (though we'd obviously recommend using OCI). 

In order to avoid the [numerous issues with tar archives](https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar) it is necessary to come up with a different format. In addition, layer representations result in needless wasted space for storage of files which are no longer relevant to running containers. Massive amounts of duplication are also rampant within OCI images because tar archives are completely opaque to OCI's content-addressable store.

Luckily the problem of representing a container root filesystem for distribution is very similar to existing problems within backup systems, and we can take advantage of prior art such as [restic](https://restic.net/) to show us how we can get significant space-savings and possibly efficiency savings.
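To illustrate the kind of content-addressed de-duplication that restic-style systems rely on, here is a toy chunk store in Python (a sketch only: it uses fixed-size chunks for brevity, whereas restic uses content-defined chunking so that insertions don't shift every subsequent chunk boundary):

```python
import hashlib

class ChunkStore:
    """Toy content-addressable store: identical chunks are stored once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # sha256 hex digest -> chunk bytes

    def put(self, data: bytes):
        """Split data into chunks, store each unique chunk once,
        and return the list of digests needed to rebuild it."""
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            d = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(d, chunk)   # dedup: keep first copy only
            digests.append(d)
        return digests

    def get(self, digests):
        """Reassemble a blob from its chunk digests."""
        return b"".join(self.chunks[d] for d in digests)

store = ChunkStore(chunk_size=4)
refs_a = store.put(b"hello world hello world")
refs_b = store.put(b"hello world!")
# chunks shared between the two blobs are stored only once
```

Because chunks are addressed by their hash, two images (or two layers) containing the same file content automatically share storage, which is exactly what tar's opacity prevents today.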

However, we also must ensure that the runtime cost of using this new system is equivalent to existing container images. Container images are efficient at runtime because they map directly to how overlay filesystems represent change-sets as layers, but with some tricks it is possible for us to obtain most of the improvements we also gained in distribution with de-duplication.

Our proposed solution to all of these problems will be laid out, with opportunities for feedback and discussion.
 recording release: yes license: CC BY  

7. Authentication Afterlife: the dark side of making lost password recovery harder
Ewen McNeill
tags: Security, Identity, Privacy
Historically, authentication was by username and password, perhaps with email as a password-reset flow. Users often wrote down their passwords (particularly older users), and possibly they only had a few passwords, so it was pretty easy to try all of them.

Modern times have proven that passwords, particularly reused passwords, are insufficient security for any slightly valuable account.  So lots of people are using password managers, randomised passwords, and 2FA (hardware tokens, TOTP, etc).  Some accounts also require an additional authentication flow (email, SMS) for "new device" logins.  "Security Aware" users are using randomised answers to security challenge questions, perhaps also stored in their password managers.

This "security improvement" has a flip side: it has gone from being unlikely that users will forget their passwords or get locked out, to being more likely that users will lose access to their accounts through loss of 2FA or additional authentication paths (e.g., phone number or email), and more likely that users will struggle with lost password recovery. And there's a darker side still: if the user is incapacitated, or has passed away, often someone else close to them will need to act "on their behalf" with those accounts (to perform legitimate transactions, send out notifications, or just to archive the account), and will likely struggle to gain access to them without the original user's full set of password manager / 2FA / etc.

How do we balance the need to improve authentication security and make malicious account takeover harder, with the need for a way for legitimate account use by bereaved family members or other trusted associates? There are no easy answers here, but considering the questions is important.
 recording release: yes license: CC BY  

8. Kubernetes Developer Workflows in Visual Studio Code
Ivan Towlson
tags: Containers
Great command-line developer tools are widely available for the Kubernetes ecosystem, but fabulous visual developer environments are coming along more slowly, hindering uptake among application developers who are new to container orchestration or who prefer visually rich development environments.

This session will introduce the free Kubernetes extension for the open-source Visual Studio Code (VS Code) editor, and show you core features that simplify and speed up the Kubernetes developer experience. I'll also show how you can add to the behaviors and views in the VS Code k8s extension, and show some extensions built on it, each illustrating a different way to make Kubernetes application development easier, faster, and more effective for your community or team.
 recording release: yes license: CC BY  

9. You Shall Not Pass
Peter Burnett
tags: Security, Identity, Privacy
Moodle is an open source learning management system, popular with universities. As Moodle has aged, some aspects of its security have fallen well behind industry standards. This talk will discuss the measures that have been taken to bring it up to scratch, and the ways that this can be applied to any application. The first priority in improving the security of the platform was its password policy, which suffered from the older model of 'You must have at least 2 uppercase characters'. To address this, a new plugin was developed for the platform, much more in line with current NIST guidelines, including checks against a user's personal information and for compromised passwords using the HaveIBeenPwned API. This talk will show the guidelines we worked against, and how they can be applied to any application's password flow.
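For context, the HaveIBeenPwned compromised-password check works by k-anonymity: only the first five characters of the password's SHA-1 hash are ever sent to the service. A minimal Python sketch of the client-side hashing (illustrative, not the Moodle plugin's actual code):

```python
import hashlib

def hibp_range_query(password: str):
    """Split a password's SHA-1 digest for HaveIBeenPwned's k-anonymity API.

    Only the 5-character prefix is sent to the service (as
    https://api.pwnedpasswords.com/range/<prefix>); the returned list
    of suffixes is then searched locally for the remaining 35
    characters, so the full hash never leaves your machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query("password")
# `prefix` goes to the API; `suffix` is compared locally against the response
```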

The next challenge to tackle was the lack of ways to augment an authentication flow. There are a huge number of ways to authenticate to Moodle, with support for all major SSO services, but no way to augment this process with additional tools such as MFA. To this end, work was done with Moodle HQ to implement a platform for this functionality on all pages that require higher security, such as changing and resetting a user's password. This talk will discuss what we learned along the way, and how to avoid common pitfalls when implementing an MFA system, such as security questions.

Finally, this talk will discuss the work that we are doing to implement MFA in a way that works alongside other authentication methods, such as SSO, with discussion on alternative factors, such as trusted IP networks.
 recording release: yes license: CC BY  

10. An intro to improving the security of your code with free analysis tools
Jason C Cohen
tags: Security, Identity, Privacy
It would seem that, despite the exponential growth in security products, security services, security companies, security certifications, and general interest in the security topic, we are still bombarded with a constant parade of security vulnerability disclosures on a seemingly daily basis. Why? Most often, vulnerabilities come down to a flaw in the source code, the logic of the code, the overall architecture, or in some cases the hardware design. In this talk, we will take a look at one way to reduce the attack surface of your software: testing via static code analysis and dynamic analysis. We will touch on the theory of how this technology works, when to use it during your development cycle, and then do a few live demos of a sampling of popular tools available for free to the Open Source community that you can leverage today to produce more secure software. The talk and demos are geared towards new developers, to build an initial awareness of the topics.
 recording release: yes license: CC BY  

11. The future of the desktop is on hypervisor powered containers
Alex Sharp, Anuj Dhavalikar
tags: Containers
Sick of dealing with networking quirks that randomly break things? Sick of dealing with dependency management for libraries that just won't get along? Sick of worrying about the latest Firefox 0-days?

Worry no more. The future is already here — it's just not very evenly distributed. This talk will run you through an existing implementation of desktop containers using the Xen hypervisor, with comparisons to Docker and LXC. We'll go through what works, how it works, what doesn't work and what the future holds.
 recording release: yes license: CC BY  

12. The Psychology of Multi-Factor Authentication
William Brown
tags: Security, Identity, Privacy
Multi Factor Authentication is becoming more important in our infrastructure, with organisations starting to require it for sensitive accounts and more. So why does Multi Factor Authentication ... work? How does human behaviour influence our security and interact with threats that exist online?

Come along and learn about human interaction and design, the psychology of how humans interact with systems. We'll extend this into security to understand why human error is really the fault of poor systems design. Finally we'll talk about different threats and how MFA works to protect us from them - at a psychological level.
 recording release: yes license: CC BY  

13. Every Image Has A Purpose
Allan Shone
tags: Containers
Container images come in all shapes and sizes, with some being more useful than others. It's easy to take a one-size-fits-all approach to building images, but there are benefits to tailor-making them for each situation. Images being deployed probably shouldn't be massive, nor should they include development tools, and it's probably unnecessary to run deployed integrations in a development environment. One approach to specifying images is to ask a few questions, naming appropriately along the way.
 recording release: yes license: CC BY  

14. An introduction to Penetration Testing using Kali Linux
Marcus Herstik

For those who have always wanted to know a little about hacking, this is an introduction to some of the tools available in Kali Linux and how you can use them to check your network for security flaws (aka vulnerabilities). As an introduction to pen-testing, this is designed for novices who are interested in the Cyber Kill Chain, how to test common systems, and those wanting to know how to get started rather than just watching videos. As such, it is not intended for advanced users.

Many people use tools and systems like Kali to run penetration tests without really knowing what they are doing. This tutorial will introduce a few tools and will provide a vulnerable server or two for you to launch your attacks against, making this closer to a real-life attack rather than just theoretical.

Users will need a version of Kali Linux installed, or the ability to quickly copy a virtual machine (VirtualBox is the suggested software for this). Short guidance will be provided at the beginning, but users will need the VirtualBox software installed.

We will start by finding the device, then testing for vulnerabilities and attempting to gain access.
A step-by-step guide will ensure that everyone gets some action and, hopefully, a greater understanding of the mindset and complexity of what it means to "hack" into something.

Participants will need to bring their own laptop with the following minimum recommended specs:
4GB RAM (2GB will be used by the VM), 20GB HDD space, at least 4 processors, a wireless network card, and VirtualBox or similar (VMware or KVM/QCOW etc.). Only VirtualBox will be supported, and I allow at most 10 minutes for setup.

This tutorial will be run by one of the writers of the TAFE NSW Cybersecurity course and a tutor for Cybersecurity at SCU on the Gold Coast.
 recording release: yes license: CC BY  

15. Automated acceptance tests for terminal applications
Roman Joost

Acceptance testing is a method of testing an application from a user's point of view. In this talk, I will demonstrate our approach to fully automated testing of a terminal email application (purebred) with the tasty-tmux framework. I'll elaborate on the benefits and trade-offs, what problems we experienced and how we solved them.

Automating acceptance testing is challenging, because the tests cannot adapt to timing-sensitive changes in the application. This causes random failures and unresponsiveness. The longer these problems are ignored, the more value diminishes and the more investment is required for workarounds and fixes.
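The standard remedy for that timing sensitivity is to poll the terminal state until an expected condition holds, rather than asserting immediately or sleeping a fixed amount. A generic Python sketch of the idea (not tasty-tmux's actual API):

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll `predicate` until it returns truthy or `timeout` elapses.

    Retrying the check at short intervals keeps timing-sensitive
    terminal tests from failing randomly on slow machines, while a
    hard deadline still catches genuinely hung applications.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

In a tmux-driven test, the predicate would typically capture the pane contents and check for an expected string before the real assertion runs.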

The audience will get a better understanding of what it takes to automate timing-sensitive tests. The concepts, problems and solutions are language-agnostic, applicable to any terminal platform and application.

Project URL: https://github.com/purebred-mua/tasty-tmux

Talk Outline:

    What is acceptance testing and why should you automate it
    What other choices of testing did we have and why we haven't chosen them
    The effort we put into automating our tests
    What we gained with automating our acceptance tests
    Future ideas
 recording release: yes license: CC BY  

16. We know when you are sleeping: The Rise of Energy Smart Meters
Rachel Bunder

Australia is rolling out smart energy meters to all homes. Instead of an analogue meter that needs to be physically checked each billing cycle, a smart meter monitors your energy in 15-minute intervals and sends this data to your energy network provider. Additionally, many people have smart home setups which often monitor home energy usage in even more detail. Even many solar systems monitor your energy consumption.

But how much can you actually tell from this data? What could this data be used for? Who even has this data? In this talk I will show what energy data is actually being collected about you, who has access to it and what can be inferred from it. I will also discuss the wider implications of having a “smart” home and “smart” energy grid.
 recording release: yes license: CC BY  

17. From bits to legs to locomotion: Building a hexapod from the ground up
Daniel McCarthy

Hexapods are six-legged robots which are a staple in academic and hobbyist circles due to their versatility and static stability. Hexapod platforms, however, specifically six degree of freedom (6DOF) hexapod kits, are commonly quite expensive. Expensive enough that I couldn't afford one, so I set out to build an inexpensive (an order of magnitude cheaper) 6DOF hexapod.

This talk is aimed at beginner and intermediate hobbyists who are interested in more advanced robotics topics. It sets out to answer one main question: how on earth does one design and build a 6DOF hexapod (hardware and software) from the ground up?

It will examine the design process, the challenges and trade-offs, and how one gets a hexapod to walk without tripping over itself (inverse kinematics and gait algorithms).
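For a flavour of the inverse-kinematics part, here is a planar two-link leg solver in Python (a simplified sketch of the general technique; a real hexapod leg adds more joints and works in per-leg 3D frames):

```python
import math

def leg_ik(x, y, l1, l2):
    """Inverse kinematics for a planar 2-link leg: return (hip, knee)
    angles in radians that place the foot at (x, y), given segment
    lengths l1 and l2. Raises ValueError if the target is out of reach."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("target out of reach")
    # law of cosines gives the knee bend
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # hip angle: direction to target minus the offset caused by the knee bend
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

def leg_fk(hip, knee, l1, l2):
    """Forward kinematics, handy for sanity-checking an IK solution."""
    x = l1 * math.cos(hip) + l2 * math.cos(hip + knee)
    y = l1 * math.sin(hip) + l2 * math.sin(hip + knee)
    return x, y
```

A gait algorithm then just feeds each leg a sequence of foot positions, and IK turns those into servo angles.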
 recording release: yes license: CC BY  

18. Snek: A Python-Inspired Language for Tiny Embedded Computers
Keith Packard

Tiny embedded computers, like the original Arduino, are great for
automating simple tasks. What they are not great at is providing an
easy-to-learn environment for new programmers.

As a part of a middle school robotics course based on Lego, I've
developed a new language, Snek, which runs on these machines. Snek can
run in as little as 32kB of ROM and 2kB of RAM. It provides a simpler,
safer, easier to explore environment than C++. Snek is a subset of the
Python language and comes with a host-based IDE written in Python that
runs on Linux, Mac OS X and Windows.

This presentation will describe the Snek language along with a few of
the interesting implementation details including:

 * A new parser generator, lola, that generates
   a parser 1/10 the size of bison

 * An in-place compacting garbage collector

 * A fine hack for representing values in 32 bits that includes 32-bit
   floats

 * Some challenges with Python syntax and
   semantics which make it difficult to fit into a small
   environment.
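On the fourth point: one common trick for fitting both 32-bit floats and
other values into a single 32-bit word is to park the non-float values
in the unused NaN payload space of IEEE-754. A Python sketch of the
general technique (an illustration only, not necessarily Snek's exact
encoding; the tag bit and payload width here are made up):

```python
import struct

# Ordinary floats are stored as their raw IEEE-754 bits; other values
# live in the quiet-NaN space (exponent all ones, non-zero mantissa),
# which no ordinary float computation produces as a plain number.

NAN_BASE = 0x7FC00000   # quiet-NaN bit pattern
INT_TAG  = 0x00200000   # hypothetical "payload is a small int" tag bit

def box_float(f):
    """Pack a Python float into its 32-bit IEEE-754 representation."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

def box_int(i):
    """Encode a small non-negative int into the NaN payload space."""
    assert 0 <= i < 1 << 20, "only small ints fit in this sketch"
    return NAN_BASE | INT_TAG | i

def unbox(word):
    """Decode a 32-bit word back into an int or a float."""
    if (word & 0x7F800000) == 0x7F800000 and word & INT_TAG:
        return word & 0xFFFFF                                   # small int
    return struct.unpack("<f", struct.pack("<I", word))[0]      # float
```

The appeal on tiny machines is that every value is exactly one 32-bit word, with no heap allocation needed for numbers.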

There will also be a demonstration of a few Snek-based Lego robots
along with a description of how Snek has been integrated into the
classroom environment. Comparisons with other embedded Python
implementations will also be provided, including Micro Python, Circuit
Python and full Python running on systems like the Raspberry PI.

Attendees will learn something about how interpreted Python
implementations operate, how Python can be used in embedded systems
and what teaching programming to middle school students (10-14 years
old) is like.
 recording release: yes license: CC BY  

19. Building a zero downtime Kubernetes cluster
Feilong Wang

Having recently gone through the experience of building, implementing and running a Kubernetes platform service in its public cloud, Catalyst Cloud has some interesting experiences and war stories to share about the journey. One of the most in-demand features of a Kubernetes service is monitoring of the cluster's health, and the ability of the orchestrator to deal with unhealthy instances and trigger replacements when needed, maximizing the cluster's efficiency and performance.

This presentation and demo are targeted at anybody interested in closed-loop automation for self-healing and maintenance in a Kubernetes cluster. Attendees will learn how this can be achieved with zero downtime for the user's application, and some good tips will be shared about how to run auto-scaling, rolling upgrades and auto-healing for a Kubernetes cluster.
 recording release: yes license: CC BY  

20. Piku: git push deployments to your own servers
Chris McCormick

Weary traveler, you have come a long way and fought a brave battle against the ensnarement of that proprietary platform-as-a-service whose name we shall not speak. You know the one. The one where you can `git push` with a couple of config files and your full stack magically rolls out like a red carpet at the Oscars. No more staying up until 2am apt-get installing, and breaking your brain on nginx configs. It is just so deliciously easy, with just one teensy little catch: you don't own the server. It is deployed into a locked-down proprietary fortress where any old Bezos can gaze like the beady Eye of Sauron upon your users' data. What's more, the deployment process is completely opaque. You don't have access to any of the source. Horrifying!

Well I'm here with a glorious Free Software salve. After leaving the dark side of the PaaS which we shan't mention, I was lucky enough to find Piku. It was a dream. No Kubernetes clusters or Docker swarms needed here. Just a thousand lines of Python and some shell scripts. You can grok the source in one night. Point the bootstrap script at a fresh VPS (or Raspberry Pi!) and a couple of minutes later you have your own multi-tenant app server for less than the price of a single one of those dance-with-the-devil PaaS accounts. Configure a git remote, write a couple of lines in your Procfile, `git push` and you're away! Your app unfolds upon your own server like an origami space station solar panel in all its shining glory. Dependencies installed, SSL cert obtained, Nginx configured, all done for you.
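For a flavour of how small that configuration is, a Piku Procfile for a hypothetical Python app could be as short as the following (the file names are made up for illustration; see the Piku docs for the process types it supports):

```
web: python3 app.py
worker: python3 queue_worker.py
```

Each line names a process type and the command Piku should run for it after a `git push`.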

This talk will go in depth into the history, minimalist philosophy, and usage of Piku. By the end of it you will be able to do zero-fiddling `git push` deploys to your own servers.
 recording release: yes license: CC BY  

21. I Was Wrong
Karen Sandler

Being involved with free and open source software teaches us that no one and nothing is perfect. Everything can be improved. Even fundamental assumptions should be revisited over time to make sure that they are still valid in light of current circumstances and new information. This talk will explore the speaker's confessions of her past views and full turnaround on a variety of issues, from diversity to software licensing, including the journey from being someone opposed to diversity initiatives to actually running one. By understanding that the issues in our field are complicated, that we may not have all of the facts on a given situation, and that our co-contributors who seem to have wrongheaded ideas may be well intentioned, we can make our software and our communities stronger.
 recording release: yes license: CC BY  

22. Everything Awesome about GPU Drivers
Daniel Vetter

About 10 years ago the first kernel modesetting drivers landed upstream, with the promise that this would usher in a new era in GPU drivers: multi-application rendering, new compositors and maybe even working suspend/resume support. Ten years later, it is time to take stock and see where we are.

Spoiler: Things are really, really good!

Come and hear about the story of a community that grew massively: A subsystem with drivers spanning from the tiniest with less than a thousand lines to the largest with over 2 million. About how to create some really good shader compilers. And everything in between. Plus a few attempts at explaining how this success was possible.
 recording release: yes license: CC BY  

23. Verified seL4 on secure RISC-V processors
Gernot Heiser

RISC-V has many attractions, ranging from the openness of the architecture, its clean-slate design based on simplicity and scalability, as well as the RISC-V Foundation's strong commitment to security from the ground up.

As such, RISC-V is an extremely attractive platform for the open-source seL4 microkernel, with its unrivalled verification and security story. This has led industry players, especially Germany-based HENSOLDT Cyber, to make a major bet on the combination of RISC-V and seL4, resulting in them funding the formal verification (implementation correctness proof) of seL4 on RISC-V.

I will discuss our experience with implementing and verifying seL4 on the RISC-V architecture, and related open-source technologies we are employing to allow us to build secure systems. This includes the CAmkES component framework that supports a security-by-architecture approach, and the Cogent systems language that is designed to reduce the cost of verified system components such as file systems and device drivers. 

One interesting aspect is timing channels. We have been working for a number of years on *time protection*, the temporal equivalent of memory protection, as a systematic timing-channel prevention. Our experience on x86 and ARM processors is that they lack the mechanisms to do this completely. RISC-V presents an opportunity to get this right, and I will report on my experience working with the RISC-V Foundation's Security Standing Committee to get the required mechanisms into the processor specification.
 recording release: yes license: CC BY  

24. Like, Share and Subscribe: Effective Communication of Security Advice
Serena Chen

For everyday people, security advice is confusing, boring, and ever changing. In response, we’ve developed what essentially are superstitious habits — theatrical, security-flavoured actions that we repeat in hopes of protecting ourselves from “the hackers”.

There are two big problems here. First, how do we effectively communicate relevant security advice to non-experts? And secondly, is that advice even persuasive enough to encourage real behavioural change? What kind of advice should we be conveying, and to whom?

In this talk we cover why everyday people don’t follow security advice. To help us come up with some solutions, we introduce concepts from behavioural design, psychology and medicine. And I put the theory to the test by trialing some unconventional ways of communicating security to the masses.
 recording release: yes license: CC BY  

25. Good, better, breast: Building a sensing mastectomy prosthetic with open hardware
Kathy Reid

In Australia every year, around 18,000 women are diagnosed with breast cancer [1]. Many will go on to have breast removal surgery, called a mastectomy. Only 12% of women who have a mastectomy will have reconstruction; most will instead opt to wear a silicone-based prosthetic.

These prosthetics are "dumb" - they're just silicone. They have 0 USB ports. What a great opportunity for open hardware!

As part of her term project in the Masters of Applied Cybernetics at the 3A Institute at The Australian National University, Kathy Reid, herself a breast cancer survivor, developed a prototype called "SenseBreast" - a sensing, smart, mastectomy prosthetic based on an RPi 3B+ and a Sense HAT. This was a "mucking around" project to learn Python, and she didn't expect it to work. 
Narrator: It worked. 

In this poignant, funny, challenging, technical, entertaining and irreverent presentation, she explores:

- motivations for the project, including a desire to keep sensor data private and personal - after all, who's watching? 
- hardware design and sensor challenges in open hardware and Python
- prosthetic design and how to build a fake breast to contain hardware
- lived experience wearing a smart prosthetic
- implications for this technology, such as in post-mastectomy recovery 
- and reflections on the broader landscape of wearable technology


[1] https://www.bcna.org.au/media/6101/bcna-2018-current-breast-cancer-statistics-in-australia-31jan2018.pdf
 recording release: yes license: CC BY  

26. What Makes Decentralisation Hard? And How Do We Overcome This?
Martin Krafft

Peer-to-peer technology has been around for decades, and has significantly shaped the file-sharing industry, much to the dismay of the media conglomerates. It wasn't until recent years, however, that the underlying concepts entered other domains, such as communication tools and storage. And of course: blockchain. And yet, despite the availability of technically sound and fully functional projects, widespread adoption is nowhere in sight.

Drawing on his experience from working with digital identity startups, Matrix, Scuttlebutt, microblogging protocols (GNU social, Mastodon), and various blockchain projects, Martin takes a shot at identifying the reasons why the uptake is so slow. Using findings from his PhD research on the adoption behaviour of Debian developers with respect to version control systems and packaging techniques, he furthermore concludes with a number of suggestions that might help in taking projects beyond the early adopter phase.
 recording release: yes license: CC BY  

27. The Linux network stack extension for DDoS mitigation and web security
Alexander Krizhanovsky

Back in 2013 we started development of a Web Application Firewall (WAF) on top of one of the widespread HTTP accelerators. At the time we realized that modern HTTP accelerators were designed to service normal HTTP requests and are not well suited to filtering massive HTTP traffic from malicious clients such as DDoS bots. A WAF protecting huge web resources or thousands of small web sites also gets overloaded by the deep analysis of HTTP and web content it must perform.

So we started to develop our own hybrid of an HTTP accelerator and a firewall, Tempesta FW, to address the problem of servicing and filtering massive HTTPS traffic. It can be used as a standalone web acceleration and protection system, or as a WAF accelerator performing pre-filtering for a more advanced WAF. Tempesta FW is an open source Linux kernel module integrated into the Linux TCP/IP stack, implementing a rich set of HTTP security features.

Tempesta FW implements HTTPtables, an HTTP request filtering tool which can be used together with nftables to define filtering rules at all network layers at the same time. Strict and flexible HTTP field verification, HTTP cookie and JavaScript challenges, as well as various rate limits, are also implemented to efficiently block HTTP(S) DDoS and web attacks.

This talk describes common issues with filtering malicious HTTPS traffic on modern HTTP accelerators, how Tempesta FW solves them, and several low-level topics such as SIMD HTTP string processing algorithms. Mostly, though, I'll concentrate on TempestaTLS - a fork of mbedTLS implementing TLS handshakes in the Linux kernel. TempestaTLS cooperates with the TCP/IP stack to send records of optimal size and avoid copying. The handshake state machine is carefully optimized for the highest performance. I'll show performance benchmarks comparing TempestaTLS with OpenSSL in workloads close to a real-life DDoS attack against TLS handshakes.
 recording release: yes license: CC BY  

28. Velociraptor - Dig Deeper
Mike Cohen

This hands-on lab introduces delegates to Velociraptor: a new open-source (AGPL) platform to perform surgical forensic evidence collection and incident response across a distributed computer network. It’s fast, precise, powerful … and free. It also supports Linux, Windows and MacOS. Velociraptor is a unique tool in that it offers a query language, so users can query their endpoints flexibly in response to new threat information.

Participants will download the latest Velociraptor executable, then configure and deploy a Velociraptor server and agent before collecting and examining evidence from across their personal test network. This workshop will focus on Linux. 

The instructors will walk through several real-life investigation scenarios, including collecting evidence of program execution, searching for evidence of lateral movement, hunting for back doors and hunting for attacker IOCs. We also explore how Velociraptor can be used to perform continuous security monitoring on the endpoints. Participants will become familiar with the main deployment options, elements of the Velociraptor interface and the procedure for configuring and executing basic hunts, before moving to the powerful Velociraptor Query Language (VQL) which opens the doors to developing custom hunts to meet specific investigation needs. We’ll also be covering management and monitoring features which ensure that Velociraptor can be used at scale, with minimal impact on network performance.
 recording release: yes license: CC BY  

29. What UNIX Cost Us
Benno Rice

UNIX is a hell of a thing. From starting as a skunkworks project in Bell Labs to accidentally dominating the computer industry it's a huge part of the landscape that we work within. The thing is, was it the best thing we could have had? What could have been done better?

Join me for a bit of meditation on what else existed then, what was gained, what was lost, and what could (and should) be re-learned.
 recording release: yes license: CC BY  

30. Panfrost: Open Source meets Arm Mali GPUs
Robert Foss

Over the past years support for the different Arm Mali series of GPUs has been crystalizing in the Open Source space.

The first few steps towards supporting some Arm GPU hardware were taken in 2012, aimed at the low-end Mali 200/400 series of GPUs.
While this work showed that it was indeed possible to create an Open Source driver, it would be a long time until the Lima driver actually materialized.

Very much unlike the Open Source driver for the Mali 200/400 series of GPUs, support for the Mali-T and Mali-G series started to be looked at only in 2017. Since then, development has progressed at a furious pace. The Mesa driver, Panfrost, has now been merged and provides initial support for the T700 and T800 series of GPUs based on the Midgard architecture.

In this talk Robert will walk you through the process of creating a driver for a new GPU, from reverse engineering to upstreaming and then finally shipping a new Open Source driver.
 recording release: yes license: CC BY  

31. Planning for and handling failures from open hardware, aviation, to production at Google
Marc MERLIN

This talk will look into the failures I've encountered in multiple fields, and into lessons learned from reading about other people's failures - a common practice in aviation that has saved countless lives by not re-creating failures and accidents out of ignorance.
You will also, hopefully, improve your spidey sense for things that could go wrong, and learn to ask the right questions or implement the right procedures or fixes before they are forced on you by downtime.
As they say in aviation, "experience is a cruel teacher: she gives you the test first, and if you survive, then you get to learn the lesson".

Examples:
- how to avoid spectacular lipo fires or circuit burns
- when aviation goes wrong: AF447, QF32, the Boeing 737 MAX, and more
- failures and avoiding failures in production at Google, including how automation can go wrong
- why mkdir -p 0755 /path/to/dir can take you down hard
- you know binary drivers suck, but do you need more examples? If so, come on by
- why this temporary fix will bite you hard soon after
- a problem is not actually fixed until it's root caused
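The mkdir bullet above deserves a concrete illustration: without the -m flag, the mode "0755" is parsed as a directory name, not a permission mode. A quick sketch of the footgun (run against the real mkdir binary in a throwaway temp directory; the paths are purely illustrative):

```python
import os
import subprocess
import tempfile

# The footgun: the user intends "create path/to/dir with mode 0755",
# but forgets -m, so "0755" is parsed as a directory NAME.
workdir = tempfile.mkdtemp()
subprocess.run(["mkdir", "-p", "0755", "path/to/dir"], cwd=workdir, check=True)
print(sorted(os.listdir(workdir)))  # → ['0755', 'path']

# What was actually meant: the mode must be passed via -m.
subprocess.run(["mkdir", "-p", "-m", "0755", "other/dir"], cwd=workdir, check=True)
```

A stray directory named "0755" appearing somewhere in a production tree is exactly the kind of quiet misconfiguration that surfaces much later, at the worst possible time.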
 recording release: yes license: CC BY  

32. Securing Container Runtimes -- How Hard Can It Be?
Aleksa Sarai

In the past few years, we have seen a varied array of different security vulnerabilities in container runtimes (often resulting in breakouts or other severe attacks against the host system). As a result, some members of the community have been looking into whether there are more fundamental issues at play which could help resolve some of these problems. In this talk, we will discuss what are possible problem areas for container runtime security and our attempts to solve some of these issues through both kernel-space and user-space protections -- and how some of these protections may help many other programs outside the container runtime community secure themselves against attackers.
 recording release: yes license: CC BY  

33. Privacy is not Binary: A discussion of data systems, ethics, and human rights
Elizabeth Alpert, Amelia Radke

In an increasingly digitised world, societal understandings of the intersection between innovative technologies, ethics, and human rights have never been more critical. However, different cultures and different sectors have differing understandings of all these things. A simple categorisation of human data as being either public or private is insufficient to describe the complexities of a single human social group, let alone the full complexity of human life and interaction that is being recorded in more and more detail every day.

In Australia, understanding the social impacts of this new regime of digital data is integral to facilitating economic, social and health benefits, without further entrenching inequality for already vulnerable peoples within society. Furthermore, the application and impact of innovative technologies from all sectors including the tech industry is highly dependent on social acceptance, which cannot avoid public debates around ethics, human rights, and responsible innovation.

There are currently many conversations about the usage of human data and issues of privacy, ethics, and digital human rights in government, academia, activist communities, technology in general, and the information security and open data communities in particular. Unfortunately, most of these conversations are happening independently of each other, and as such are missing out on the knowledge, experience, and perspectives of other sectors.

This presentation is a discussion between an anthropologist (Dr Amelia Radke) specialising in digital human rights and a data infrastructure engineer (Betsy Alpert) on how the notions of privacy, security, and ethics play out in our respective fields. We argue that good data needs collaboration and deliberate design, particularly in our ever more data-centric world.
 recording release: yes license: CC BY  

34. How internet congestion control actually works in the bufferbloat age
Dave Taht

You start a big upload or download, and your ssh connection goes to heck, web pages get delayed, your videoconference glitches, or you start missing your opponents in your game. Bufferbloat is one cause. While the bufferbloat problem is largely fixed in Linux, it's rarely configured properly on the gateways, and thus remains at epidemic proportions across the Internet - this talk touches upon how to configure that stuff properly - but... why does the network get slow? *How* is the network supposed to deal with overload? This talk is a deep dive into how TCP is supposed to work, covering concepts like slow start, congestion avoidance, windowing, fair queuing and active queue management, the roles of packet drop and ECN, and alternate TCPs and transports such as BBR and QUIC, in the hope that deeper knowledge of how our most basic network transports work will lead to the design and implementation of better systems on top of them.
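The slow start and congestion avoidance behaviour mentioned above can be sketched in a few lines. This is a toy model with illustrative numbers only, not a protocol implementation: the window doubles each round trip until it hits the slow-start threshold, then grows by one segment per RTT, and a congestion signal halves it.

```python
def aimd(rtts, loss_at, cwnd=1, ssthresh=64):
    """Toy model of TCP slow start plus AIMD congestion avoidance.

    cwnd is the congestion window in segments; loss_at is the set of
    round-trip numbers at which a drop (or ECN mark) is observed.
    """
    history = []
    for rtt in range(rtts):
        if rtt in loss_at:
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh          # multiplicative decrease on loss
        elif cwnd < ssthresh:
            cwnd *= 2                # slow start: exponential growth
        else:
            cwnd += 1                # congestion avoidance: +1 segment per RTT
        history.append(cwnd)
    return history

# Window climbs, halves at the loss in round trip 6, then climbs linearly.
print(aimd(rtts=10, loss_at={6}))
```

The characteristic "sawtooth" this produces is what fair queuing and AQM on the bottleneck are supposed to interact with; bloated buffers hide the loss signal and break the feedback loop.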

There will be a couple of live demos using humans as packets, and there will be a quiz!
 recording release: yes license: CC BY  

35. Clevis and Tang: securing your secrets at rest
Fraser Tweedale

Full disk encryption and, more generally, encryption of secrets at
rest are essential tools in the security toolbox.  But deploying
encryption at rest can have costs: latency (downtime), repetition
(productivity loss), proneness to error (typos; "was that '1' or
'l'?"), challenges in supplying a passphrase when needed (e.g.
headless systems).  Automated decryption often relies on delivery of
escrowed keys (a third party knows your secret).

We can do better.

_Tang_ [1] is a protocol and (along with the client-side program
_Clevis_ [2]) software implementation of *network bound encryption*;
that is, automatic decryption of secrets when a client has access to
a particular server on a secure network.  It uses McCallum-Relyea
exchange, a two-party key computation protocol based on Diffie-Hellman
where only the client can compute the key!  _Clevis_ [2] uses the
amazing *Shamir's Secret Sharing* algorithm to implement unlock
policies with thresholds that can include passphrases, Tang servers
and TPM-sealed secrets.
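The threshold idea behind Shamir's Secret Sharing is compact enough to sketch here. This is a toy illustration, not the Clevis implementation: the secret is the constant term of a random polynomial over a prime field, and any k shares recover it by Lagrange interpolation at zero.

```python
import random

PRIME = 2**127 - 1  # a prime large enough for a demo field

def split(secret, n, k):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 yields the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[2:]) == 123456789
```

Fewer than k shares reveal nothing about the secret, which is what lets Clevis mix passphrases, Tang servers and TPM-sealed secrets into one unlock policy with a threshold.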

In this talk I will outline the use cases, explain the algorithms
and demonstrate these tools.  The live demo will set up a machine to
automatically decrypt a LUKS volume when a required number of Tang
servers are available.  I will conclude with a discussion of
limitations, assumptions and threats.

[1] https://github.com/latchset/tang
[2] https://github.com/latchset/clevis
 recording release: yes license: CC BY  

36. Kicad for software developers
David Tulloh

This tutorial will take you through the process of designing a circuit and PCB using Kicad. 

We will design a watering system relay controller, with wireless communication and solar power. From block diagram, to circuit, to PCB.

This will be a guided journey, a Disney on-rails adventure where the story, components and design have been pre-chosen for a cultivated, time-controlled experience. The focus will be on the use of the tools: eeschema to design the circuit and pcbnew to lay out the PCB.

You will be shown each technique, then given time to practice it by finishing off the other elements while I circulate to assist. Awkward silences will be filled by discussions of the design decisions that have been taken, amusing anecdotes, and opinionated advice on what constitutes good design. Checkpoints with pre-prepared files for catch-up will be provided in case you struggle in any particular area.

Please ensure that you have Kicad 5 installed and have tested that it runs; we will not have time during the session to fix faulty installs. You will also need an internet connection to be able to download pre-prepared files and PDF datasheets.
 recording release: yes license: CC BY  

37. Control Flow Integrity in the Linux Kernel
Kees Cook

Like all C/C++ programs, the Linux Kernel regularly suffers from memory corruption flaws. A common way for attackers to gain execution control is to target function pointers that were saved to memory. Control Flow Integrity (CFI) seeks to sanity-check these pointers and eliminate a huge portion of attack surface. It's possible to do this today with the Linux kernel (or any program) with Clang/LLVM's CFI implementation.
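The core idea - verifying that an indirect call target is legitimate before jumping to it - can be mimicked in a few lines. This is a toy Python analogy only; Clang's real CFI inserts equivalent checks into compiled code, keyed on the function type at each call site.

```python
def handler_a(x):
    return x + 1

def handler_b(x):
    return x * 2

def not_a_handler(x):
    return "attacker-controlled"

# The set of legitimate targets for this call site; in Clang's scheme this
# corresponds to "functions whose type matches the pointer's declared type".
VALID_TARGETS = {handler_a, handler_b}

def guarded_call(fn, arg):
    """Perform the indirect call only after the CFI-style check passes."""
    if fn not in VALID_TARGETS:
        raise RuntimeError("CFI failure: illegitimate indirect call target")
    return fn(arg)

print(guarded_call(handler_b, 21))  # → 42
```

An attacker who overwrites a function pointer with not_a_handler (or an arbitrary address, in the real C case) now triggers an immediate failure instead of silently redirecting control flow.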

This presentation will discuss how Android is using Clang's CFI in the Linux kernel for recent phones, how it is being upstreamed, and what you can do to use CFI yourself. We will explore what Clang actually inserts for code, data, and symbols to protect indirect calls, what needed fixing in the kernel to support it, and what's still missing. We'll wrap up with a short demo of CFI foiling a kernel attack.
 recording release: yes license: CC BY  

38. Open collaborations: leadership succession and leadership success
Ann Smith, Myk Dowling

Some projects seem to thrive year after year. Others get forked over and over until potential contributors face nigh-impossible decisions. Some just wither away and never produce a product.

What keeps one project alive while others fail to thrive?

We look at an open source project in a crisis involving leadership and critical stakeholders, at how the crisis was resolved, and what the world of business research tells us about leadership. We can apply that body of knowledge to make our projects function better.
 recording release: yes license: CC BY  

39. Desktop Linux, without a keyboard, mouse or desk
Shervin Emami

Humans are extremely fast at using a keyboard and mouse on a desk, but all three of these have bad long-term health risks. Are there any OTHER ways of using a desktop PC? What if you use Linux instead of the more common OSes? What if you're a hardcore full-time Linux programmer and therefore need a lot of functionality and customisation, and you need enough speed and reliability to use it long term? This talk will show some examples of how these are all possible. I'm a full-time Linux programmer who doesn't use a keyboard or mouse. Note that I said it's possible; I didn't say it's easy!

I'll show demos of speech recognition (for dictation, programming, and mouse control), head tracking & eye tracking mouse, foot pedals, handheld switches, RFID password entry, LED status, and forced regular breaks using Bluetooth proximity detection to make sure you actually step away from the computer regularly.
 recording release: yes license: CC BY  

40. Electronics from your Kitchen Drawer
Peter Chubb

Most of the high street electronics shops (with one notable exception) have closed down, so one has to order components online. But resistors, capacitors and inductors are fairly easy to make from materials commonly found around the house. Active elements to provide gain, or rectification, are a little harder to come by, but possible.

In this talk I'll show off home made capacitors, coils, and resistors, and negative-resistance active elements, and describe my adventures in developing a radio receiver without using any commercial electronic components.
 recording release: yes license: CC BY  

41. It's All About Timing
Dave Chinner

Before you can run a timed motorsport event, you have to have a timing system. But what do you do when the requirements for the timing system are severe enough that the only off-the-shelf solutions are so excitingly expensive the event does not have the budget to obtain access to such a system? You guessed it:

"Dave, can you build a timing system for us?"

In this talk I'll walk through the challenges of designing and building a millisecond-accurate timing system using open source software and tools - one that has timing points far enough away that running cables is not practical, sits in terrain that makes radio comms largely impossible, has to be set up on site in less than 60 minutes by one person, and includes safety systems integrated into race control protocols.

This timing system involves Beaglebones, designing and building custom hardware capes and the software to interface with them, as well as all the software to control the system and handle all the timing logic and safety procedure interlocks. It involves python, C, MQTT, wide area networking, highly accurate distributed time synchronisation and, of course,  fast cars.
 recording release: yes license: CC BY  

42. Open Source Citizenship
Josh Simmons

We all rely on open source software and, as our reliance grows, so do our policies for managing compliance and programs for cultivating mutually supportive relationships with the communities behind the software. 

In this session, attendees will be given a thorough accounting of: 
* what companies are doing to support open source communities, 
* what kind of support open source communities are actually asking for, 
* how to build a culture of open source citizenship in your company,
* and how to make it easy for companies to support your community.

Based on discussions with industry and community leadership, we'll establish a current and sweeping perspective on corporate open source engagement. 

By understanding the state of the art, and knowing what needs remain unmet, we can help our companies be even more effective in supporting healthy communities. And not just because it's the right thing to do... 

After all, healthy communities translate into greater productivity, innovation, and stability, and better security!
 recording release: yes license: CC BY  

43. Open AND High Performance Computing
Hugh Blemings

Open hardware for embedded and general purpose applications with low to medium compute performance requirements has become almost commonplace, a response to a growing understanding of the need for solutions that embrace libre principles.

Open hardware for server class/high performance computing requirements is a relatively new concept but is, appropriately, becoming an area of focus for data centres, supercomputers, and security conscious workstation/end users alike. Contributors to the OpenPOWER project have been working together to create an open ecosystem based around the POWER Instruction Set Architecture that enables truly open -and- high performance compute solutions. 

The resultant systems extend from high performance embedded systems and desktop workstations up to the world's fastest supercomputers, and have an entirely open software stack - firmware to remote management to hypervisor to OS to applications.  Many of the systems have hardware that is likewise entirely open source at the schematic, PCB and, increasingly, component level.

Put colloquially: OpenPOWER systems are the only commodity high performance systems out there that don’t have any funny hidden management engines, or binary blobs of executable code in their firmware.  Some even ship with a recovery DVD that, in addition to the full source code for everything running on the machine, includes the schematics and other technical drawings too.

This session will explain the importance of open hardware and software at all levels of the compute environment - embedded/IoT to desktop to hyperscale systems. From there it moves to a brief introduction to OpenPOWER and the OpenPOWER Foundation, an update on the ecosystem and the status of the hardware and software stacks in question, and an overview of some of the OpenPOWER hardware out there.  There will be at least one system of great interest to LCA attendees that will have its Oceania debut at this talk.

This will be a technical/community/ecosystem talk, not a “sales pitch” :)
 recording release: yes license: CC BY  

44. No Docs? No Problem! From Zero to Full Documentation in Less Time than You Think
Nathan Willis

This session is a guide to developing and deploying technical documentation for a mature codebase in minimal time. It details practical advice and lessons learned from the speaker's personal experience taking the HarfBuzz library from a starting point with virtually no documentation to a full-fledged set of internal and external references, plus code-integration and end-user guides.

The topics covered will include designing documentation from the top down as an "alternative API" to the code itself, strategies for staging a large documentation project in discrete parts that can be deployed successively, building documentation in a continuous-integration environment, and practical tips for documentation teams.

The presentation will also showcase the benefits of rolling out documentation in parallel to development, including insights on system architecture and improved community involvement.
 recording release: yes license: CC BY  

45. The Story of PulseAudio and Compress Offload
Arun Raghavan

While PulseAudio has been a standard component in desktop and embedded Linux for a decade now, it was always written with uncompressed audio data in mind.

To save power, modern SoCs often support "compress offload", where an efficient DSP can receive compressed MP3/AAC/... data, then decode and render it for playout.

In this talk, I'll describe how this was implemented in PulseAudio, what challenges and tradeoffs were involved, and what the future might hold for this work.
 recording release: yes license: CC BY  

46. Linux in the Cloud, on Prem, or... on a Mainframe?
Elizabeth K. Joseph

Discussions around where to host your Linux-based infrastructure tend to center around whether you should use the cloud or your own on-premises hardware. Architectures beyond x86 are rarely discussed. This talk will give you a glimpse into the modern mainframe running Linux, and why it and other alternative architectures like ARM and POWER should be considered.
 
We’ll first look at the birth of time-sharing and the first Virtual Machine (VM) technology that surfaced on the mainframe in the 1970s, driven by community-based efforts and with help from one of the oldest computer organizations in the world. From there, we’ll have a look at how VM technology allowed Linux to be run on the mainframe by a group of hobbyists in the late 1990s, and then was swiftly noticed by IBM and development continued for the polished product we see today.
 
Today, there is an entire product line of mainframes that exclusively run Linux (RHEL, SLES, or Ubuntu) rather than the data-focused, batch-processing operating systems they are best known for. For the security-conscious, encryption technology built into the processor and on additional PCIe cryptography cards is accessible in Linux, allowing for end-to-end encryption of data at rest and in flight that doesn’t burn all of your regular processing resources. Enterprise-grade hardware with built-in redundancy reduces the need to manage a fleet of x86 servers, and the entry-level mainframes today even fit into a 19” rack space in the datacenter. There are even a few cloud services run by IBM that, transparently to the user, use a mainframe on the back end.

Looking to the future, with policies and laws making data protection ever more important, there will be an increasing need for systems with hardware-driven encryption technologies built in. Power consumption will also keep growing as the rate of on-demand data processing and storage continues to soar. As a result, we'll likely see alternative architectures that prioritise power savings become increasingly compelling.
 recording release: yes license: CC BY  

47. Decoding battery management data - back in the old school
Paul Wayper

It's great to live in an era of self-documenting APIs, easy-to-read markup, good documentation and limitless bandwidth and memory.  However, a lot of systems don't have those luxuries.  Projects as diverse as SaMBa and Nouveau have relied on decoding and inferring meaning from binary data output from closed systems with no documentation.

For my own part, I wanted to read the battery status from my electric motorbike (http://mabula.net/3faze/).  The 'normal' way to do this is via a USB connection and a Windows program, but this is inconvenient when you don't run Windows and definitely much harder to do when actually riding the motorbike!  So I wanted a way to read and record the cell voltages and overall pack status, and even link that to GPS information so I could see how fast I was going and how much power I was drawing for a ride.

This talk will include actual data, Raspberry Pis, Grafana, serial connections, Python, InfluxDB, GPS modules and Perl - not necessarily in that order.
 recording release: yes license: CC BY  

48. Engineer tested, manager approved: Migrating Windows/.NET services to Linux
Katie Bell

Running .NET Framework code on Linux used to be something that you would approach with caution, and only if you needed to, especially if the code was originally written to only ever work on Windows. With improvements to Mono and the release of .NET Core, this is now easier, more reliable and more officially Microsoft supported than ever.

At Campaign Monitor we've got lots of Windows-specific C# .NET code, in particular we had these 78 http-serving and background task processing services that caused us some headaches. I and a couple of fellow engineers set out to convince management that a project to migrate them all to .NET Core (on Linux) was worthwhile, and convince them we did. I will take you through this six month project from its start as a crazy idea to its successful completion and results. We'll learn a lot about .NET Core on Linux and Docker, what is easy to migrate (and what is not), the expected and unexpected issues we encountered and how to get a project like this off the ground and done.
 recording release: yes license: CC BY  

49. When Jargon Becomes Gibberish
Casey Schaufler

You've most likely been there: you're half an hour into an important technical meeting and you realize that, of the last dozen words you've heard, you only recognized two, and neither would seem to have any bearing on the topic at hand. You look around and see that everyone but the presenter appears to be mentally grasping for some key to understanding what's being said. Your carefully gathered notes say "SDL PRS review SS2 Smokey Lagoon PKR BKM". Somehow, the jargon you've relied on for precise and detailed technical communication has devolved into incomprehensible gibberish.

Casey Schaufler, who's been working in operating systems development for the past 40 years, explains the reasons technical documents and presentations so often become impossible to decipher.  There's much more to it than the unguarded and unexplained use of acronyms, abbreviations and code words. The impact of language background on the use of technical terms is explored. How cultural differences between individuals and organizations can interfere with technical communications, and how that differs from general communication, is discussed. What can go wrong when technical documentation is mistranslated into plain language, resulting in words with completely different meanings, also gets much-deserved attention. Casey will provide examples - some humorous, some maddening, some otherwise - which demonstrate the problems we face. In the end, hopefully useful and pragmatic approaches for avoiding the worst gibberish production are presented.
 recording release: yes license: CC BY  

50. Compiling Your Story: Using Techniques from Compiler Design to Check Your Narrative
Jon Manning

Programmers are used to their compilers catching tiny problems in their code. When you're a writer, it's harder to catch these problems. Wouldn't it be nice if you could run an error-checker on your dialogue?

This talk discusses the application of compiler optimisation and correctness checking techniques to branching narratives, which allow authors to verify the logic of their narrative independently of running it in the game.

Yarn Spinner is an open source narrative design tool used in many games, including Night in the Woods (Infinite Fall / Finji) and OK KO: Let’s Play Heroes (Capybara Games / Cartoon Network), that allows both writers and narrative designers to write, edit, script and manage their game's dialog using a simple scripting language that's reminiscent of Twine. While most games devise their dialog in script form, it's typically implemented using another system, such as a spreadsheet, or node-based tools. By contrast, Yarn Spinner implements a domain-specific language designed to be easy for a non-programmer to write their dialogue in, and easy for a programmer to implement gameplay-specific behaviour.

Compilers for most programming languages are designed to catch common mistakes, and give warnings where appropriate. In recent years, tools for static analysis of programs - a technique where a compiler analyses the behaviour of code without running it - have allowed programmers to detect subtle bugs that a simpler compiler could not. For example, it's possible to identify a combination of branches that lead to an uninitialised variable being returned from a function.

Because Yarn Spinner is a complete programming language, we are able to apply the same techniques from compiler design to assist writers. For example, a compiler is able to analyse a branching narrative tree, take into account the variables that are checked to decide what options are available, and determine that a line of dialog can never be reached.
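The unreachable-line analysis described above boils down to a graph walk. Here is a toy sketch over a hypothetical dialogue graph (plain Python, not real Yarn Spinner syntax): each node lists the nodes its options can jump to, and any node never reached from the start is dialogue the player can never see.

```python
def unreachable(dialogue, start):
    """Return the dialogue nodes that can never be reached from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(dialogue.get(node, []))  # follow every option
    return sorted(set(dialogue) - seen)

# A hypothetical branching conversation: no option ever jumps to SecretEnding.
dialogue = {
    "Start":        ["AskName", "Leave"],
    "AskName":      ["Leave"],
    "Leave":        [],
    "SecretEnding": [],
}
assert unreachable(dialogue, "Start") == ["SecretEnding"]
```

The full analysis in the talk goes further, tracking the variable conditions on each edge rather than just connectivity, but the reporting step - flag everything outside the reachable set - is the same.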

These techniques allow for more reliable testing of dialogue. Testing a branching piece of dialogue often relies on the tester repeating a number of decisions in order to set up the necessary state, which can be repetitive and tedious. However, a static analyser is able to identify what the values of certain variables _must be_ in order for a line to be accessed, which allows the tester to jump straight to the lines under test, skipping large amounts of per-test setup time. This allows someone who’s trying to test out a conversation to do it in realistic conditions, skipping past the repetition while also guaranteeing that the state of the dialogue is the same as if they’d performed it manually.

In this talk, I’ll discuss work on Yarn Spinner that implements symbolic execution and basic block analysis, which allows writers to easily spot problems in the implementation of their narrative, saving development and testing time, and avoiding wasted production resources.

Come and learn how to get started with these techniques, and how to use them in Yarn Spinner, the open source narrative design tool!
 recording release: yes license: CC BY  

51. A B C of 3D : Introduction to making 3D art using blender
sreenivas alapati

It's really hard to escape the 3D buzzword these days. You find it used in all sorts of places, right from the movies you watch, the games you play, 3D printing, WebGL graphics in the browser and AR/VR applications. In this tutorial, we are going to cover the basics of 3D and do a hands-on session on creating 3D art using a professional open source 3D application, Blender.

Takeaway:
By the end of the session…
> Have a broad overview of 3D art.
> Have a working knowledge of the professional open source 3D application, Blender.
> Have a deeper understanding of the workflow for creating 3D art.

Prerequisites:
> Laptop with a decent GPU (any modern laptop)
> A mouse with a middle click button (scroll which is clickable)
> Download and install Blender from https://www.blender.org/download/
 recording release: yes license: CC BY  

52. Musings of an Accidental Chair - change from the inside out
Lyndsey Jackson

In 2017 Lyndsey became the Chair of Electronic Frontiers Australia. Over the past two years the EFA board and volunteers have worked to increase engagement and participation, and to be more effective - not easy when everyone is a volunteer. How have we reached this nexus where underfunded and under-supported organisations and individuals are the key line of defence in speaking up for digital rights, security, privacy and ethics? Strengthening community partnerships has meant strengthening internal operations, and juggling the balance of keeping an organisation's lights on. And that alone doesn't leave much time for deep strategy on community messaging and public engagement.

How can we do better? Lyndsey entered the world of digital rights via her campaign work building the website and social media accounts for #notmydebt, which provided a voice for people affected by robodebt. This campaign proved people do care about rights, the impact of automation, and maintaining integrity in Australian values, systems and government. Survey after survey tells us Australians care about privacy and security, and as technologists we hold the tools, networks, and expertise to shape a better future. Let's scope the work we all need to do, together.
 recording release: yes license: CC BY  

53. Collecting information with care
Opal Symes

How do you sign up to a website, if you don't have a first name? Or a name that doesn't fit in 30 characters?
How are you supposed to buy an airline ticket if the gender on your passport isn't available on the dropdown?

I will cover how to build forms so that they can handle these stress cases. By asking the right questions, in the right way, and supporting the full range of answers; we can build forms that collect the required information without excluding users.

I will explore these forms through a series of real-world scenarios. I'll discuss the needs of the application, how to collect the right information, store it, and display it. I will also cover how to protect vulnerable users and their privacy by noting what information should not be readily available.
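
As a taste of the approach, here is a minimal sketch of an inclusive name check (the field names and the 300-character limit are hypothetical examples, not the speaker's recommendations): collect a single free-text "full name" field and validate only what the application truly needs, rather than imposing first-name/last-name structure or short length limits.

```python
# Illustrative sketch: validate a name field without excluding mononyms,
# long names, diacritics or non-Latin scripts. The limit is an example.

def validate_name(name: str, max_len: int = 300) -> list[str]:
    problems = []
    name = name.strip()
    if not name:
        problems.append("Please tell us what to call you.")
    if len(name) > max_len:
        problems.append(f"Sorry, we can only store {max_len} characters.")
    # Deliberately absent: "must contain a space", "letters only",
    # "30 character maximum" - all of these exclude real names.
    return problems

print(validate_name("Björk"))  # [] - a mononym is a valid name
print(validate_name("   "))    # one problem: no name given
```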
 recording release: yes license: CC BY  

54. In-depth technical story: Fixing I/O performance for Windows guests in OpenStack Ceph clouds
Trent Lloyd

A rapidly scaling private OpenStack + hybrid HDD/SSD Ceph cloud began to experience very slow I/O performance for their Windows guests - making them practically unusable.

This is the in-depth technical story of how the issue was found and fixed, including the surprising outcome that this I/O was always going to be slow on an OpenStack Ceph cloud with a large Windows guest footprint - until the fixes that have since been developed are deployed at the storage, host and guest image layers.

Spoiler Alert: The underlying reason is related to Windows guests by default aligning I/O to 512-byte boundaries but Linux and Ceph generally work best with (and usually only submit, this is key) I/O aligned to 4096-byte boundaries. The story doesn't end there though. I will go in-depth on the fixes and changes needed to Ceph, Nova, Cinder and the Windows VirtIO drivers to get everything working smoothly.
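
The arithmetic behind the spoiler can be shown in a few lines (a sketch of the general alignment problem, not the actual Ceph/Nova code): a request aligned only to 512 bytes can straddle 4096-byte block boundaries, so layers that natively work in 4 KiB units must read, merge and rewrite extra blocks.

```python
# Sketch: how many 4 KiB blocks does an I/O request touch?

BLOCK = 4096

def align_down(x, a): return x - (x % a)
def align_up(x, a):   return -(-x // a) * a  # ceiling to a multiple of a

def blocks_touched(offset, length):
    start = align_down(offset, BLOCK)
    end = align_up(offset + length, BLOCK)
    return (end - start) // BLOCK

# A 4096-byte write at a 4096-aligned offset touches exactly one block...
print(blocks_touched(8192, 4096))  # 1
# ...but the same-sized write at an offset aligned only to 512 bytes
# straddles a boundary and touches two, forcing a read-modify-write.
print(blocks_touched(8704, 4096))  # 2
```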
 recording release: yes license: CC BY  

55. VM block error injection, a novel approach for testing Linux storage
Tony Asleson

One of the most important characteristics of any operating system is the ability to safely store and retrieve end user data.  However, this area can be one of the most difficult things to test completely.  Hardware manufacturers strive to make their hardware as reliable as possible, so testing error paths for uncommon conditions is problematic.  For developers, having a simple, easy-to-use way to create these types of errors is vital for ensuring correctness.  Developers have tools like scsi_debug, dm-flakey, and others to exercise these error paths, but they all have limitations.

Virtual machines are ubiquitous, but people may not think of using them to purposely and deliberately create errors for testing.  We have an opportunity to provide a test environment that allows easy testing of these infrequent error paths: forcing the kernel down read error paths, write error paths, silent data corruption detection/correction, timeouts, etc., during all phases of kernel operation and even before the kernel gets loaded by the boot loader.  It's the only way to present an error that is just like actual hardware.

Attend this talk to learn about:
* Importance of testing the storage stack
* Difficulties in testing
* Different approaches in generating storage errors
* Benefits of adding testing capabilities to virtual machines vs. other approaches
* Other possibilities (block histograms, access patterns, playback)
* A preview proof of concept written for scsi-disk device in QEMU
  https://github.com/tasleson/qemu/tree/block_error_inject
 recording release: yes license: CC BY  

56. Behind (and under) the scenes of the Meson build system
Jussi Pakkanen

The Meson build system has been used for several years to build the foundations of a modern Linux userland, including projects such as systemd, X.org, GStreamer and the Mesa graphics stack. During this time we have encountered many challenges and milestones ranging from multiple distro upgrades to bootstrapping RISC-V as a whole new processor architecture.

In this talk we shall look into the many weird and wonderful/awful things that happen when dealing with the lowest layers of a modern Linux system and the things you need to consider when designing a low-level build system. We shall also look at the outcomes of these decisions, the myriad bizarre ways people want to build and configure their projects, and how environment variables are the tool of the devil. Finally we shall try to estimate what the future holds for building, especially when it comes to cross-language cooperation, and how these, and many other, requirements make the life of a build system developer interesting.

Including that one time we were told to rewrite Meson in Perl.
 recording release: yes license: CC BY  

57. NTFS really isn't that bad
Robert Collins

Why was rustup slow (3m30s to install (not including download time)) in early 2019 on Windows, and why isn't it slow (14s to install) now?

Early in 2019 I was developing some things in Rust on Windows, got frustrated at the performance of rustup (the installer for Rust) and looked for a bug. I found one which blamed a combination of NTFS, small files and anti-virus software for the problem. This wasn't an entirely satisfactory answer to my mind. Fast forward a bit, and with a combination of changes to rustup and to Windows itself, rustup is now pleasantly usable... which also improved performance for rustup on NFS-mounted home directories.

I'd like to share with you the story of this experience as well as the technical constraints that drove both the poor performance and the solution we put in place, which has also helped rust-doc performance.
 recording release: yes license: CC BY  

58. Building an ethical data infrastructure
Marissa Takahashi

Digitization of society has resulted in massive amounts of digital data that can be collected for various purposes in both industry and academia. The increasing size and complexity of datasets and the increasing sophistication of analytical methods raise ethical questions, especially as research agendas move beyond the computational and natural sciences to more sensitive social aspects of human lives such as behaviour, interaction and health.

It becomes increasingly urgent to strike a balance between the benefits of big data research and the ethical implications on human subjects who generate those digital data. This talk will discuss the issues involved in building a trusted ethical data infrastructure.  These issues will be illustrated in a case study of the Digital Observatory, a data science platform in academic research.
 recording release: yes license: CC BY  

59. Introduction to Rust (for people who have never used a compiler)
Tim McNamara

Rust is often described as having a difficult learning curve. Let’s find a hidden escalator. By the end of this tutorial, you’ll find building a command-line utility in Rust to be about as easy as one that’s written in Python or Ruby, and you’ll be able to distribute it as a stand-alone binary.

Many of the difficulties in learning Rust emerge from two sources: jargon and novel concepts. Avoiding the jargon keeps some of the cognitive capacity in reserve to focus on the new concepts. Much of the jargon has its roots in Rust’s functional programming heritage and early adoption by programming language theorists. It’s possible to bypass much of that and focus on getting something working first. There’s plenty of time to learn about affine type systems and monomorphisation once you’ve got a working project or two.

This practical tutorial will take you through the process of writing a command-line utility from scratch. We’ll learn all about the essentials of the language, as well as the tooling that’s available for people who may not have worked with a compiled language before. We’ll also spend some time talking about how to access further help when you’re working on your own.

At this stage of its life cycle, everyone who knows Rust learned something else first. That means that there are plenty of people in your position who have been able to push through and make progress.

Come and learn Rust!
 recording release: yes license: CC BY  

60. ROS on your robot: the tale of an inside, an outside robot and 2 arms
Ben Martin

ROS, the "Robot Operating System" is more accurately described as a framework of open source software that runs on Linux and helps you build robots by combining larger "nodes" of code. ROS helps these nodes communicate and can handle (re)starting them, monitoring them, and stopping them for you. With ROS you can use existing code to convert a depth sensing camera into fake laser scan data. Then you can insert your own code to update a map showing where obstacles are located. This way you can focus on the parts of the robot that are the most interesting to you at the time and draw on a large base of existing code to enable your robot to perform complicated tasks without having to write everything. Better yet you can drop parts of the code in and out of the robot to compare how well an idea works against an existing implementation.

I will talk about ROS and about some of the robots I have built using it. There are many lessons along the way, some of which are only learnt the hard way, it seems. It is always fun when your robot doesn't notice it is heading toward a wall, and you pick it up off the ground only to have it stop moving and think it is magically "outside the room" on its internal map. The books don't seem to cover the magical ghost_robot_mode=off option.

Initially I attempted to run a low degree of freedom arm using MoveIt, which is the ROS arm control software. I ended up writing that code manually, and found that you need at least a 6 degree of freedom (dof) arm to succeed using MoveIt. While some cheap arm kits are sold as 6 dof, they aren't, and you will need some tinkering to make them real before MoveIt will yield acceptable results: https://www.youtube.com/watch?v=qfx0TrbOsok

I hope that some of the stories of long battles will inspire interest in using ROS and playing with robotics. I will have a few of my builds on hand including the outdoors "houndbot" with discussions about physical builds.
http://monkeyiq.blogspot.com/2018/04/my-little-robotic-pals.html
 recording release: yes license: CC BY  

61. Privacy Preserving IoT
Christopher J Biggs

Right now, the state of privacy on the Internet is "we collect every bit of data about you, crosslink everything and use it to manipulate your attention". The Internet of Things brings the promise (threat?) that "every bit" comes to mean not just everything you did online, but also everything you did in your home, workplace, car and bedroom.

The future is shaped by those who have the strongest vision of what it should be.   Right now that's Big Data, which 
culturally rhymes with "Big Oil", "Big Tobacco" and "Big Pharma".   If we don't want the grim meathook future they are cooking up for us, we need to visualise what we DO want  and fight harder to make it happen.

So what does a privacy-preserving future look like?    How can we construct an internet where the value of information accrues to individuals, not to billionaires?

Many of the pieces are already in place.

  * Emerging data processing algorithms such as Private Set Intersection and Homomorphic Encryption
  * Personal data enclaves such as the Hub of All Things (HAT) (hubofallthings.com)
  * Data exchanges like the Sam project (samnow.com)
  * Privacy-first IoT data networks like LoRaWAN and Amazon IoT
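
To give a flavour of the first item, here is a toy illustration of the *goal* of Private Set Intersection: two parties learn which items they share without publishing their full sets in the clear. This is only a sketch - real PSI protocols use oblivious cryptographic techniques; comparing salted hashes, as below, hides items from outsiders but not from a party willing to brute-force the hash.

```python
# Toy PSI-style matching: each side publishes only salted hashes, and the
# intersection of hashes reveals exactly the shared items to the two parties.
# NOT a secure PSI protocol - purely an illustration of the idea.
import hashlib

def blind(items, shared_salt: bytes):
    return {hashlib.sha256(shared_salt + item.encode()).hexdigest(): item
            for item in items}

salt = b"agreed-out-of-band"
alice = blind({"alice@example.com", "carol@example.com"}, salt)
bob   = blind({"carol@example.com", "dave@example.com"}, salt)

shared = {alice[h] for h in alice.keys() & bob.keys()}
print(shared)  # {'carol@example.com'}
```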

Join us as we fit these pieces together  and imagine what Internet life (aka "life") might look like when we wrest power back from Big Data.
 recording release: yes license: CC BY  

62. Portable, Attested, Secure Execution with Enarx
Nathaniel McCallum

So you have some code you want to run in the cloud. But it has sensitive algorithms and data. Negotiating private computing resources isolated from the rest of the cloud is expensive and time consuming; it also isn't very elastic. So what is an application developer to do?

This talk will summarize the ongoing development of the Enarx open-source project, sponsored by Red Hat. Enarx uses the hardware isolation provided by Trusted Execution Environments (TEEs) to provide attestation, privacy and tamper protection at deployment time. Our long-term goal is to enable you to write your application using existing tooling but choose your execution target dynamically at deployment time.

This talk will include a demonstration of our current capabilities and a roadmap for future development.
 recording release: yes license: CC BY  

63. How to capture 100G Ethernet traffic at wire speed to local disk
Christoph Lameter

Capturing Ethernet data sounds simple. Who has not run tcpdump? But things are not that simple at 100G anymore due to restrictions on performance of the disk subsystems and the Linux operating system overhead. And why would one want to capture 100G traffic? One reason is that most of the fiber optic wide area links are being converted from 10Gbps to 100Gbps circuits and another is that it is mandatory to capture all traffic going through links in some industries due to government regulation (well governments may want to capture traffic too for other reasons). And one does not want to build a large distributed file system or a large RAID array for that. What we want is a simple 1U Pizza box that can be deployed anywhere without too much fuss where 100G lines are deployed.

This talk is going on a journey through the various hardware and software considerations to get a box like that configured and talks about numerous hard lessons learned as to what works and what did not. On the way we encounter numerous hardware and software limitations that seem to be blocking the way and that required creative solutions.
 recording release: yes license: CC BY  

64. KUnit - Unit Testing for the Linux Kernel
Brendan Higgins

KUnit[1] is a new lightweight unit testing and mocking framework for the Linux kernel. Unlike Autotest and kselftest, KUnit is a true unit testing framework; it does not require installing the kernel on a test machine or in a VM (however, KUnit still allows you to run tests on test machines or in VMs if you want) and does not require tests to be written in userspace running on a host kernel. You can read more about KUnit in this LWN article[2].

In the first half of the talk we will provide background on what unit testing is, why we think it is important for the Linux kernel, how KUnit provides a viable unit testing library implementation, and offer a brief demonstration of how it might be used.

In the second half of the talk we will talk about the future. We will talk about KUnit's roadmap, the challenges that KUnit is facing, how to structure the Linux kernel testing paradigm, and how KUnit fits into it.

[1] https://google.github.io/kunit-docs/third_party/kernel/docs/
[2] https://lwn.net/Articles/780985/
 recording release: yes license: CC BY  

65. The Ops in the Serverless
Jennifer Davis

A function is deployed and alerts go off. When our intrepid site reliability engineer responds to the change in availability, she begins the task of debugging and implementing new tests to catch the issue in future deployments. While the nature and complexity of computing changes, the need for specialized operations engineering skills only increases.

In this talk, we will examine the increased need for specialized operations engineering in the age of serverless. We’ll use the serverless platform to explore three critical areas of operational readiness: testing, monitoring, and debugging.
 recording release: yes license: CC BY  

66. The Secret Life of Routers
Sachi King

Router Rooting and the Secrets of Their Daemons.

Modems and routers are a ubiquitous part of many people's lives, and over time these devices have evolved to be more and more user friendly. Internet Service Providers (ISPs) no longer simply provide a modem; they now provide modem-router combo devices, and they manage the whole device.  A user is expected to be able to plug the device in, wait a few days for the ISP to connect it, then never have to think about it again.  This makes routers part of the Internet of Things, but what does this mean for router security, your network, and how the router is configured?

It is becoming harder to gain root access to consumer routers over the network, but this might not actually mark a positive trend in actual security for the devices.  We'll be looking at an ISP-provided router, its (ob)security, what's been done right, where it has gone wrong, and what exactly is running on it.  As a bonus, we'll compare this to a consumer-bought device with its manufacturer's daemons.
 recording release: yes license: CC BY  

67. The EU Says The Laws of Mathematics Apply in Australia
Dan Shearer

Free software developers, network engineers and privacy advocates have been given a gift by the EU, in the form of six strongly-enforced laws based on human rights that have computer science embedded in them. This talk covers:

    • How human rights are directly linked to computer science via the legal text of these EU laws
    • What new software solutions are required by these laws
    • What long-standing bad internet security practices are banned according to the text of the law
    • Examples of code based on long-standing open source libraries that meet the requirements of the new EU laws 

On the one hand, security habits until recently classified as  “best practice” are now moved to “fix it  now or get off the internet”, which is great news for those who have been advocating for better security for years. Infrastructure providers are required to be secure. Security algorithms known to be cracked may not be used. On the other hand, the EU has introduced new concepts in software-mediated contracts between infrastructure suppliers, and a new emphasis on the privacy of the endpoints in end-to-end communications. 

In various ways, these laws affect how personal data and communications are handled outside Europe including in Australia, mandating better security and privacy. Which is just as well, because these are dark days for individual privacy in Australia.

In January 2020 the Australian Consumer Data Right bill is expected to become law, providing privacy protection to any “reasonably identifiable person, including a business enterprise”, which includes persons such as News Corp and BHP. At the same time the Data Sharing and Release Bill will be enacted, which will remove more than “500 existing data secrecy and confidentiality provisions across more than 175 different pieces of Australian Government legislation”, ensuring that companies such as News Corp and BHP have easier access to Australian citizens’ data. The Federal Court of Australia decided in 2017 that metadata is not personal data, so no doubt the Data Sharing and Release Act will indeed "streamline delivery of citizen data services" from the Australian Government to private companies.

Successive Australian prime ministers and their governments believe that “the laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.”   Not only do the laws of mathematics apply in Australia, but Australian companies wanting to do business that involves EU residents find themselves covered by EU laws, and EU laws have mathematics right at their core.

Instead of (or as well as?) feeling despair at the state of privacy in Australia, the free software community can argue that there is an economic benefit in adopting an EU rights-based approach. Australian companies who deal with EU residents must comply with EU law. Maybe we can end up using the gold standard in privacy, even in Australia.
 recording release: yes license: CC BY  

68. Open Source Won, but Software Freedom Hasn't Yet: A Guide & Commiseration Session for FOSS activists
Bradley M. Kuhn, Karen Sandler

History never unfolds as we would expect.  It's surprising and jarring that we've achieved both so much and so little.  Every day, there is more Free and Open Source Software (FOSS) in the world than ever in history, but it's also a little bit harder each day to live a life that avoids proprietary software.  Today's world of software technology is a ridiculous paradox. 

Most software that we depend on every day is under someone else's control.  Whether it's the cloud service run by a big company, the medical devices that keep us alive, or the JavaScript application for everything from our banking to our social media, the code that handles our most sensitive data and life-essential computing tasks is usually proprietary.  Even Linux-based devices, which are ubiquitous, rarely comply with the GPL and are therefore more or less as proprietary as any other device.  Linux is everywhere, yet early FOSS adopters have never had less software freedom than we do today.

Once upon a time, it was viable for someone living in the industrialized world to function in daily society in pure software freedom.  In those days, being a software freedom activist was akin to being a vegan or vegetarian: activists could (relatively conveniently) live a lifestyle that reflected our values and proved our sociopolitical point in mundane, daily terms.

Leading by example is not so easy anymore.  The strongest supporters of software freedom among us, if they chose to remain living in the industrialized world, make compromises.  Our political opponents  tell us that our cause is misguided since these compromises "aren't so bad".  Meanwhile, our would-be political allies question our commitment to the cause because we carry devices with some proprietary firmwares.  Navigating this complex climate may well be the hardest challenge we face.

Cooptation is commonplace for social justice movements, and the cooptation process can be ongoing for decades.  The software freedom movement is a few years into this cooptation: this is precisely why we see major leaders stand up and shout "Open Source is the default; Open Source has won!" while presenting slides from a MacBook.  The most difficult days don't lie behind us; they lie ahead.

This talk is about surviving the personal struggle of software freedom activism in this current climate.  Many of us want a world with only FOSS and no proprietary software, but it's unlikely we'll liberate ourselves from proprietary software in our lifetimes.  How do we live our lives to maximal effect to carry forward the torch of software freedom both in this generation and onto the next?  How do we weather the inevitable failures and seemingly insurmountable challenges as we watch what was once FOSS slowly become proprietary again, or see new technologies exist only as proprietary, or, even worse, exist as a warped version of FOSS that "seems open" but fails to give most software freedoms to most users?  Let's learn and explore together how to survive as activists now that the going got tough.
 recording release: yes license: CC BY  

69. Room scale VR tracking with OpenHMD
Jan Schmidt

The OpenHMD project provides cross-platform support for a range of virtual reality hardware. A variety of projects can use OpenHMD for VR - like the Godot game engine, Blender and the Monado OpenXR platform.

In the 0.3.0 release, the NOLO driver became the first one to add support for room-scale tracking. Now, work is underway to support room-scale tracking with other devices like the Oculus Rift and HTC Vive.

This talk will provide an overview of VR (what is room scale tracking anyway?), the OpenHMD VR ecosystem and then take you through the technical details of what's required to make a device like the Oculus Rift work well.
 recording release: yes license: CC BY  

70. How to Write a Retro Arcade Emulator
Josh Bassett

If you were a kid during the 70s, 80s, or 90s, the chances are you spent time (and money) in your local video arcade playing games. Later, the rise of home video game consoles sadly brought an end to the golden era of video arcades – the rows of machines crafted out of plywood and vintage electronics, abandoned for the quiet comfort of our living rooms.

Not only did we lose the visceral experience of the video arcades, but we also lost our appreciation for the engineering wizardry that went into building these great machines. In this talk, Josh takes a look at the hardware engineering behind one of his favourite games of the 1980s, and shows you how to build an emulator to preserve a little piece of gaming history.
 recording release: yes license: CC BY  

71. Macro Security for your Microservices
Sreejith Anujan

Breaking down a monolithic application into atomic services offers various benefits, including better agility, better scalability and better reusability of services.  However, microservices also have particular security needs:

Traffic encryption to defend against man-in-the-middle attacks.
Fine-grained access control and mutual TLS.
Auditing tools to identify who did what, and when!

Istio addresses the security challenges developers and operators face in a distributed microservice architecture. Istio provides strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA) tools to protect your microservices and data. 

In this hands-on tutorial session, attendees will:
1) Understand the high-level architecture of Istio
2) Enforce custom policies to limit traffic to a service
3) Encrypt service traffic using mutual TLS

Takeaway: Learn how Istio enforces security features to mitigate insider and external threats against your data, endpoints, communication and platform, wherever you run your microservices.

Pre-requisites: 
Intermediate understanding of container technology and microservices architecture. BYOD with a modern browser and an internet connection to access cloud based labs.
 recording release: yes license: CC BY  

72. Using WhatsApp as a Command Line (Breaking out of the Walled Garden)
Tishampati Dhar

This talk describes the WhatsApp chat protocol and the various endpoints where data can be unlocked from this classically end-to-end encrypted channel. Unlocking and automating the WhatsApp channel is a prerequisite for scaling businesses in the third world, where this is a dominant communication medium. In the third world, and from my personal experience in Kenya, a lot of businesses run on the command line - the command line being the WhatsApp message text entry box. This talk describes means of putting a shell processor behind that command line to parse natural language, emoji and semi-structured data, to capture issues from the field (stderr) and information from the field (stdout), and to forward parsed data to order management, tickets and other systems (pipe, redirect).

We will cover initial unofficial attempts including:
-  Direct HTTP protocol unlocking 
-  Using Headless browsers such as PhantomJS/Puppeteer with WhatsApp for Web
-  Using Android Notifications API to forward messages to another app
-  Using rooted Android phones to uncrack the encrypted db and forward content
-  You can't stop the screenshot

Recently there are more official channels for general purpose automation and CRM / Support Chat integrations via
- Twilio
- FreshChat
- Zendesk
- Others as WhatsApp graciously grants access

We have firsthand experience with the Twilio WhatsApp API / Python SDK, which should be familiar to anyone who has used the Twilio API before. This is layered with text comprehension advances from modern neural-network-based approaches to create effective automated customer service, order creation and field data capture solutions.

A lot of Python web development has focused on building APIs (stateless), user interfaces targeting the browser, etc.; however, chat as a UI requires a stateful backend which has memory and uses context effectively to craft responses. These bots are often domain specific, and little attempt is made to be Turing-passing as opposed to being functional with semi-structured input.

Semi-structured input can be flexibly parsed using mostly rule-based NLP; deep-learning-based NLP is the flavour of the day and only becomes necessary for processing client input (from users outside the organization who cannot be effectively change-managed). Tools in the classic Python NLTK library for processing WhatsApp data include:
- Stemmers
- Tokenizers
- Fuzzy Matchers
- Regular expressions (for deterministically named entities such as truck registrations, container numbers, vessel names etc.)
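
A minimal sketch of this rule-based parsing, using only Python's standard library rather than NLTK (the command list and entity pattern are hypothetical examples): a fuzzy matcher tolerates typos in command words, and a regular expression picks out ISO 6346-style container numbers.

```python
# Illustrative rule-based parsing of a chat message: fuzzy command matching
# plus regex extraction of deterministic entities. Patterns are examples.
import re
from difflib import get_close_matches

# ISO 6346-style container numbers: 4 letters + 7 digits, e.g. MSCU1234567.
CONTAINER = re.compile(r"\b[A-Z]{4}\d{7}\b")

KNOWN_COMMANDS = ["status", "order", "ticket", "help"]

def parse_message(text: str):
    """Pull a command word and any container numbers out of a chat line."""
    tokens = re.findall(r"\w+", text.lower())
    command = None
    if tokens:
        # Tolerate typos: "statis" still resolves to "status".
        match = get_close_matches(tokens[0], KNOWN_COMMANDS, n=1, cutoff=0.7)
        command = match[0] if match else None
    containers = CONTAINER.findall(text.upper())
    return {"command": command, "containers": containers}

print(parse_message("statis of MSCU1234567 pls"))
# {'command': 'status', 'containers': ['MSCU1234567']}
```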

When push comes to shove, the data can be piped via one of the cloud NLP platforms as well. Overall the humans in the WhatsApp treadmill can be assisted by machines to systematize the flood of words.
 recording release: yes license: CC BY  

73. Playable Ads: What REALLY are they?
Evan Kohilas

Have you ever clicked the *FREE GEMS* button and been served an ad, only to find you're now trialing a game?

What are they? Are they running code? 
If they are, can we hijack them?
If we can't, can we bypass, or even replace them?

And more importantly, what are they actually doing?
Are you really playing a game? Or are they bitcoin miners in disguise?

Come along and join my adventurous curiosity as we learn to man-in-the-middle and reverse engineer these ads and discover what they're really about!
 recording release: yes license: CC BY  

74. Practical Ethics: building it better in 2020 and beyond
Nicola Nye

When will your personal data be hacked, leaked or compromised? If you think it hasn't already happened to you, think again. What about your parents' data? Your children's data?

We are increasingly inhabiting a digital world, scattering electronic footprints in databases near and far. We, as technologists, have a responsibility to ensure that our products are using that data for good. Treating the humans and communities we serve with respect. Behaving ethically both individually and collectively in the companies and teams we work in. There are those who would have you believe that ethics and capitalism cannot coexist, that businesses are only rewarded if they put profit first and customers last. I refuse to accept this is true.

This talk provides an introduction to what it even means to 'be ethical', shows how doing so can benefit your bottom line, and explores some practical tips to build your ethical muscles, so we can build the kind of future we want to have, and that our children deserve to have.
 recording release: yes license: CC BY  

75. eChronos Lyrae: A 64-bit multi-core RTOS kernel for ARM and RISC-V
Ben Leslie

eChronos Lyrae is the latest kernel to be developed as part of the eChronos family of real time operating systems.

eChronos Lyrae is a simple-to-use real-time operating system that targets modern 64-bit multi-core hardware platforms, including both ARM and RISC-V.

Multi-core hardware presents challenges to an RTOS designer. There are important trade-offs between ease of use, implementation complexity and utilisation. This presentation will discuss some of these challenges, as well as providing an overview of the kernel and where it can best be used.

There will also be some war stories; OS development isn't just about the outcomes, but also the hardware bugs you find along the way!
 recording release: yes license: CC BY  

76. What Lies Beneath: What are they really tracking and how?
Anne Jessel

Many people know that Facebook and other companies track what we do online. Cookies and JavaScript are complicit in allowing Facebook and others to know what we like and who our friends are. Some people accept this as part of the price of a $0 service they enjoy using. Others take care to block cookies, and reduce the amount of personal data that third parties can gather about them.

But what information is really being collected when you are using the web? Where, when and how is it being collected? Do those who agree to this data collection really know what they are handing over? And are those who don't agree having any success at protecting their personal data?

I will show you just how extensive the information Facebook openly admits to collecting is (if you know where to look!), and how easy it is for Facebook to collect it without your knowledge. You will also see how third parties routinely gather your data from other websites, in many cases without the website owner realising.

We will look at a real world example by analysing a fairly typical website of a well-known company that isn't known for its data collection, to see what sorts of things it is sending to third parties. In addition, you'll see the results of research into what those third parties use the data for.

Finally I will discuss what you can do if you wish to better protect your personal data, and some of the problems you may face, including why deleting your cookies may be counter-productive.
 recording release: yes license: CC BY  

77. Large Pages in Linux
Matthew Wilcox

Since 2002, Linux has used huge pages to improve CPU performance.  Originally, huge pages supported 2MB pages on x86.  They evolved to support other architectures and, eventually, 1GB pages on x86.  Despite this relative success, the huge page mechanism is not flexible enough to support related hardware features.  One desirable feature is a "medium" large page size (e.g., ARM CPUs support a 64kB page size).  Another is a larger page size (e.g., some network devices support pages as large as 2GB).

In this talk, I will argue that using larger pages to reduce software overhead is as important as enabling hardware optimisations.  I'll talk about the recent patches to improve the performance of larger pages in the page cache.  I'll also talk about patches to bring support for larger pages to normal filesystems.  And I'll talk about some of the downsides and future limitations of using larger pages.

This talk is for kernel developers and those who are interested in learning more about how some hardware works.  Since these optimisations are supposed to be transparent to user space, no changes should be needed to userspace code to take advantage of them.  End users should only notice their web browsers running faster, their database queries completing faster and their birds being slightly less angry.
 recording release: yes license: CC BY  

78. Advanced Stream Processing on the Edge
Eduardo Silva

Logging is one of the oldest mechanisms for performing application or hardware analysis. In a new era of distributed systems at scale and connected embedded devices, data collection and processing become a real challenge; logging has been forced to evolve and adapt to new needs.

In data analysis, logging is one of the key components used to collect and pre-process data. Usually, a logging mechanism collects, parses, filters and centralizes logs into a storage backend such as a database, so that data processing and analysis can be performed. This typically happens after the data has been aggregated and stored, but for real-time analysis, processing the data while it is still in motion brings a lot of advantages; this kind of approach is called Stream Processing.
 
What if it were possible to query your data using aggregation functions, windowing, and grouping results while the data was in motion and in-memory, but on the edge side?

In this presentation, we will go further and present an extended approach called 'Stream Processing on the Edge', where data is processed on the edge service or device, in a lightweight mode empowering features like anomaly detection (in the order of milliseconds) and Machine Learning in a distributed way using pure Open Source software.
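As a rough illustration of what windowing and grouping "while the data is in motion" means, here is a minimal tumbling-window aggregation sketch. The record format, window size and keys are invented for the example; a real edge stream processor would evaluate this incrementally over a live feed rather than over a list.

```python
from collections import defaultdict

def tumbling_window(records, window_s):
    """Group (timestamp, key, value) records into fixed-size windows and
    aggregate per key -- the kind of query a stream processor answers
    while the data is still in motion."""
    windows = defaultdict(lambda: defaultdict(list))
    for ts, key, value in records:
        windows[int(ts // window_s)][key].append(value)
    # Aggregation function: average per key, per window
    return {w: {k: sum(v) / len(v) for k, v in groups.items()}
            for w, groups in windows.items()}

events = [(0.5, "cpu", 10), (1.2, "cpu", 30), (5.1, "cpu", 50)]
print(tumbling_window(events, 5))   # {0: {'cpu': 20.0}, 1: {'cpu': 50.0}}
```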
 recording release: yes license: CC BY  

79. Everything you know is wrong: why using big words can make you sound stupid
Lana Brindley

There's a writing style people use when they want to sound smarter. It goes something like this:

We're using iterative approaches to corporate strategy to foster collaborative thinking and further the overall value proposition.

Depending on which reading test you use, you need to have a grade 20 equivalent reading age to be able to understand that sentence. Many doctoral candidates haven't even been to school for twenty years. So why do we write like this?

There is some evidence (yes, actual scientifically sound, peer-reviewed evidence) indicating that people who show signs of power are treated in a way that allows them to actually achieve such power. In other words, 'fake it til you make it' is an actual, scientifically proven way to get what you want.

But, if you're writing technical documentation, or an email to your boss, it's more important that you're seen as being honest rather than powerful. It would be nice to think that people will read your writing and think "wow, what a clever writer that Lana Brindley is! She knows lots of fancy words, and I think I'd like to be her friend". Sadly, if your writing is full of jargon, they're more likely to think "what a silly twit" and go read something else.

In this talk, Lana will go through some of the ways you can make your writing clearer, more engaging, and honest, without using too many big words.
 recording release: yes license: CC BY  

80. The Fight to Keep the Watchers at Bay
Mark Nottingham

Many of the Internet's protocols were designed at a time when no one cared who was watching. That's no longer true, and so the Internet community has put a tremendous amount of effort into making communication between two endpoints *only* between those two endpoints.

This talk will recap what's happened so far, explain what's left to do, and explore the larger context, including legal issues, architectural impact, open questions, and limitations.
 recording release: yes license: CC BY  

81. Behind the scenes of an ELK system
Rafael Martinez Guerrero

Behind every security measure you take, you should have an information management system helping you make decisions.

If you work with security, you need a way to collect, process, save and analyze huge amounts of data, which you can then use to monitor how your systems are behaving, find anomalies and evaluate the results of your actions.

Have you ever wondered how to manage billions of logs and metrics from thousands of devices in your infrastructure? If you need high availability and a resilient, stable system to process your data, this is the tutorial for you.

Based on the experience obtained in the past 4 years at the University of Oslo processing billions of logs a day from more than 15000 devices, this tutorial will give some inside information and many tips about how to achieve this with Linux and open source software.

You will learn how to put together HAProxy, agents, Logstash, Elasticsearch and RabbitMQ to work at scale. You will also hear about the problems and pitfalls we have experienced during these years and what we learned from them.
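As a taste of how these pieces fit together, here is a minimal, illustrative Logstash pipeline along the lines the tutorial describes. The hostnames, queue name and index name are invented, and a production configuration would add TLS, retry/dead-letter handling and tuning:

```
# logstash.conf -- illustrative sketch only
input {
  rabbitmq {            # buffer between the collection agents and Logstash
    host  => "mq.example.org"
    queue => "logs"
  }
}
filter {
  grok {                # parse HAProxy HTTP log lines into structured fields
    match => { "message" => "%{HAPROXYHTTP}" }
  }
}
output {
  elasticsearch {
    hosts => ["es1.example.org:9200", "es2.example.org:9200"]
    index => "haproxy-%{+YYYY.MM.dd}"
  }
}
```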
 recording release: yes license: CC BY  

82. TPM based attestation - how can we use it for good?
Matthew Garrett

Systems with a Trusted Platform Module generate a cryptographically verifiable event log of every component of the boot process. They can then provide a signed quote of this log in order to prove to a remote site that they booted the expected software. In the early 2000s we were concerned about that resulting in websites that would refuse to grant you access unless you were running an unmodified proprietary operating system, but for various reasons that turned out to not be a problem in the real world. Some years later, how can we use this attestation data for the power of good?

This presentation will describe the functionality of TPMs and how the event log is generated, and describe techniques for making use of TPMs to protect access to network resources, solve the problem of trusting SSH host keys in enterprise environments and make it easier for people to recover their systems while on the road. It will include demonstrations of using newly released open source software to build novel attestation solutions for protecting end users without giving up privacy or control.
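The measurement chain behind this can be illustrated in a few lines: each boot component is hashed and "extended" into a PCR value (new = H(old || H(component))), and a remote verifier replays the event log to check that it reproduces the quoted PCR. This sketch uses SHA-256 and invented component names:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = H(old PCR || H(component))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

pcr = bytes(32)  # PCRs start zeroed at boot
event_log = []
for component in [b"firmware", b"bootloader", b"kernel"]:
    event_log.append(hashlib.sha256(component).hexdigest())
    pcr = extend(pcr, component)

# A verifier replays the event log and checks that it reproduces the
# PCR value the TPM signed in its quote
replayed = bytes(32)
for entry in event_log:
    replayed = hashlib.sha256(replayed + bytes.fromhex(entry)).digest()
assert replayed == pcr
print("event log verified against quoted PCR")
```

Because each extend folds the previous value into the next, no component can be removed or reordered in the log without the final PCR failing to match.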
 recording release: yes license: CC BY  

83. Zero Trust SSH
Jeremy Stott

SSH certificates are an under-utilised feature of OpenSSH, but they offer a fantastic method to solve some pain points of growing teams and growing infrastructure. You don't need to manage complicated directories to live on this greener side of the fence.

Hosts only trust a single public key of a trusted certificate authority, instead of keys from every developer (and let's be honest, several who are no longer working at your company :uhoh:). SSH certificates expire (this is good), and can also tell SSH what you can or can't do with your session. They can even help mint a new user on a brand new trusting host. And if you need to use sudo, don't worry, your certificate's got your back too.

How do you get short-lived SSH certificates from a self-service certificate authority? Grab your identity on the CLI using some nifty OAuth2 in your browser, swap this identity for temporary AWS credentials, invoke a lambda function, sign a public key, and you're on your merry way.
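The signing step at the heart of this flow is plain OpenSSH. A minimal local sketch (key names and identities are invented, and in the setup described above the CA private key would live behind the lambda rather than on disk):

```shell
# One-off: create the certificate authority key pair
ssh-keygen -q -t ed25519 -f ./ssh_ca -N '' -C 'example-ssh-ca'

# A developer's ordinary key pair
ssh-keygen -q -t ed25519 -f ./alice_key -N '' -C 'alice@example.com'

# Sign alice's public key: identity "alice", principal "alice",
# valid for one hour only -- this produces ./alice_key-cert.pub
ssh-keygen -q -s ./ssh_ca -I alice -n alice -V +1h ./alice_key.pub

# Inspect the certificate: principals, validity window, extensions
ssh-keygen -L -f ./alice_key-cert.pub

# Host side: one line in sshd_config trusts every certificate the CA signs:
#   TrustedUserCAKeys /etc/ssh/ssh_ca.pub
```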

Open source tools are all over this problem. Let's combine some that have been around forever, and some brand new ones into an awesome solution.
 recording release: yes license: CC BY  

84. The magical fantasy land of Linux kernel testing
Russell Currey

The Linux kernel does a lot of stuff, and runs on a lot of stuff.  I'm sure we can all agree that this is a good thing, however the matrix of stuff it does and stuff it runs on continues to get bigger and bigger!  With thousands of commits each release and a widely distributed and decentralised developer community, how do we make sure that the kernel still works on everything, does everything it's supposed to do, and hasn't slowed anything down in the process?

In this session we're going to be looking at the huge variety of automated kernel testing projects to figure out what's going on, covering a number of different areas, including:

- per-patch CI to quickly test if a developer broke something,
- built-in kernel selftests and the push for more unit testing,
- performance testing of the kernel itself and userspace,
- regression testing, especially for known security issues,
- hardware testing, from enormous 512TB machines to huge farms of small SOCs.

By understanding the huge web of projects out there, hopefully we can figure out how we could get more stuff done more effectively.  It's a difficult problem in the broad and uncoordinated space of Linux kernel development, but it's all in pursuit of the dream: 

the magical fantasy land - with no duplication of code or effort, where everything is tested, where everyone knows where everything is, and where bugs are never introduced again.
 recording release: yes license: CC BY  

85. Privacy, Security, Convenience; when it comes to home automation, can we pick all three?
Ben Dechrai

Most of today's home automation products rely heavily on cloud services. This allows us to manage and control our homes from anywhere in the world, by placing the configuration and logic processing in a publicly accessible location and avoiding opening our home network.

But as we know, the cloud is just someone else's computer that we have to trust. If they are the arbiter of what happens in your home, you are not truly in control.

This presentation discusses end-to-end encryption, secure claims, network firewalls and segmentation, and a smattering of zero-knowledge theory. You'll hear some of the available options for resolving the connectivity issues and even taking some home-automation devices off-line altogether, without weakening your home network or losing the ability to verify the validity of all operation requests within.

As part of an ongoing project to bring these theories to life, this talk includes a live demo of a custom-built garden irrigation setup, featuring genuine H2O.
 recording release: yes license: CC BY  

86. Securing firmware: Secure and Trusted boot in OpenBMC
Joel Stanley

The OpenBMC project has brought modern Linux technologies to the firmware in your new server. A missing piece of this is ensuring the firmware is the image you expect it to be running, whether that is something your vendor shipped, an update, or something you built yourself from the open source project.

The next generation of BMC hardware will allow a hardware root of trust to secure the entire boot chain. Come hear about how that works, and how the design goes to great lengths to ensure that user freedoms to replace firmware are preserved while still remaining secure. This talk will cover TPMs, EEPROMs, keys, and signing, from a firmware perspective.
 recording release: yes license: CC BY  

87. Building a Compiler for Quantum Computers
Matthew Treinish

Just as with classical computers, we need tools to convert the programs we write into something that can actually be run on quantum computers. For classical computers this normally involves converting a higher-level language into machine code, but with quantum computing the programs are written at a much lower level, the equivalent of assembly code. However, because of limitations in the quantum computers available today, even programs written at this low a level have to be adapted and optimized for each specific backend to run successfully. Making this compilation process effective and efficient directly impacts how a program will perform, and whether you're able to get a meaningful result or not.

This talk will explain what is involved in compiling software to run on a quantum computer and why it is necessary. It will cover how it works, the different optimization techniques that are available, and how they can affect the results of running your program. It will also cover how you can customize the compiler optimizations used, to better optimize your program and get better results, and how bad compiler output can produce a noisy result, or even no meaningful result at all.
 recording release: yes license: CC BY  

88. Affordable Custom Input Devices
Jonathan Oxer, Chris Fryer

Learn how Open Source software and hardware can be used to build a custom “button box” which can be adapted to suit the needs of an individual, and allow them to control a computer by acting as a keyboard, mouse, or game controller. Then through their computer, they can control their world.

Physical disabilities can take many different forms. Everyone has a unique body and needs, but medical devices are extremely expensive to develop due to the overhead of regulatory compliance. The economies of scale that can be achieved with mass market consumer goods just don’t come into play when it comes to devices designed to help with specific physical problems.

There is no “one size fits all”, and in many cases it’s necessary to design and build one-off devices to suit a specific individual. With traditional approaches this is prohibitively expensive.

Open Source technologies including Arduino and 3D printing have opened the door to low-cost DIY solutions that can be customised to suit the individual.

For some people with disabilities it can be much easier to navigate in the virtual world than in the physical world. Co-presenter Chris has Duchenne muscular dystrophy and when he’s not using his computer he is very limited in his ability to interact with his physical environment, but he can do almost anything on his computer with the use of custom-built input devices.

Combining these custom input devices with computers acting as his intermediaries, we have worked together on projects that allow him to reach out and control his physical environment in a way that hasn’t been possible for most of his life.
 recording release: yes license: CC BY  

89. RFC 1984: Or why you should start worrying about encryption backdoors and mass data collection
Esther Payne

In 1996, Brian E. Carpenter of the IAB and Fred Baker of the IETF wrote a co-statement on cryptographic technology and the Internet. This RFC wasn't a request for a technical standard; it was a statement of their concerns about governments trying to restrict or interfere with cryptography. They felt that there was a need to offer "All Internet Users an adequate degree of privacy".

Since that time, successive governments around the world have sought to build back doors into encrypted apps and services to access more citizen and visitor data. In July 2019, the Attorney General of the United States, William Barr, stated: “Some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety”; i.e., for the sake of security, Americans should accept weakened encryption. The head of the FBI also claimed that weakening encryption wouldn't break it.

In Australia, the metadata retention laws have been abused against journalists, with 58 searches carried out by the AFP. In 2015, ACT police carried out 115 metadata searches. UK officials have a cavalier attitude to the EU SIS database, which tracks undocumented migrants, missing people, stolen cars and suspected criminals.

IETF Session 105 mentioned privacy and concerns with the mass collection of data. While the IAB and IESG were worried about US export controls on cryptography, there is an argument for RFC 1984 to be updated to cover the unnecessary mass collection of data, and to serve as a rallying point for IT professionals, privacy advocates and the public.

In this talk let's recount a brief history of governments around the world wanting to weaken encryption as RFC 1984 warned us about.  

We live in a time where citizens put data into commercial, healthcare and government systems to access services; some services are only accessible online. From CCTV to Facebook, people have little understanding of why the mass collection of data is dangerous. There is little scrutiny of who can access that data, from Scotland to the US.

Open surveillance is only a small part of the picture when profiling citizens. It still counts as personal data when combined with metadata and the actual data that people put into social media and services like ancestry DNA test kits. Businesses who use CCTV have to put up signs to warn the public that they are recording. So-called anonymized data still contains identifiers that can be tied to individuals.

Let's talk about Ovid and peacocks. Let's explore how to expand the RFC to cover recent developments in surveillance capitalism with governments accessing that data, but not securing it. We need to make it clear weakened encryption, the mass collection and careless retention of data isn't acceptable. We need to update and implement RFC 1984.
 recording release: yes license: CC BY  

90. smbcmp: A new tool to diff network captures
Aurélien Aptel

While debugging network protocol issues, we often have to look at network captures. Wireshark is an excellent tool for making and analyzing network captures, and one we rely on. A common scenario is comparing a capture of a "working" case with a "failing" case, possibly made by different client/server implementations.
But when you are looking at hundreds of packets, each holding hundreds of fields, this quickly becomes problematic.

To help with this problem, I have come up with a new open source tool that reuses Wireshark and allows you to look at captures side-by-side and diff packet details, similar to a diff for source code. This talk will cover how the tool works, present its more advanced features, and show how I personally use it in my work on the Linux SMB client.
 recording release: yes license: CC BY  

91. Professional quality layout design with Scribus
Kathy Reid

Scribus is an open source desktop publishing tool, akin to Publisher or InDesign. It is a powerful, complex application that produces beautiful, professional quality layout designs, suitable for printing or PDFing. 

Scribus has a steep learning curve, and this can stop people from wanting to learn more about it.

In this 100-minute tutorial, we will cover:

* Document setup, page sizes and margins
* Concepts of grouping, alignment and distribution
* Layout elements such as images, shapes and layering
* Working with text using character and paragraph styles, and a primer on fonts
* Colours for printing, and using the colour palette in Scribus
* Exporting to PDF and considerations for PDF such as file size reduction

Prerequisites: 

* Ensure that you have Scribus installed on your laptop prior to attending the tutorial
 recording release: yes license: CC BY  

92. Transpile anything to everything!
Anna Herlihy

Compass, the UI for MongoDB, is an Electron app that allows developers to visually develop aggregations and queries for their database. Right now it accepts these queries in the MongoDB Shell syntax, a JavaScript-based query language. However, developers use a wide range of programming languages in their apps, and constant context switching between languages can be painful. To cure this pain, we wanted to allow users to export the queries they built in Compass into whatever programming language they wanted. Even better, we wanted to also allow users to write their favorite language directly into Compass. To achieve these goals, we needed a way to translate query syntax in any programming language into query syntax in any other language, so we needed to write a multi-language-input to multi-language-output transpiler in web-friendly JavaScript!

Further, since MongoDB has so many diverse and passionate language communities, I really wanted the transpiler to be “pluggable” - community members should be able to add their favorite language to Compass without needing to be compiler experts or know about what other languages were implemented. It is not enough to simply open source the code and hope people contribute. The compiler was architected with distributed collaboration in mind from the very start, and this talk will describe all the steps we took to make the barrier to contribution as low as possible.

This talk will go through the technical design of the anything-to-anything pluggable transpiler and teach attendees how they can add their own favorite language to Compass. I’ll talk about classic compiler design principles and how I leveraged various compiler technologies to create a dynamic, extensible transpiler. Lastly, I’ll talk about how we can take this transpiler and apply it to an abundance of other use cases!
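To make the "pluggable" idea concrete, here is a toy sketch (not Compass's actual architecture; the names, registry and query shape are invented) in which each output language is a small generator registered against the core, so adding a language means adding one class rather than touching the compiler internals:

```python
# Registry mapping a target-language name to its code generator
GENERATORS = {}

def target(name):
    """Class decorator: register a generator under a language name."""
    def register(cls):
        GENERATORS[name] = cls()
        return cls
    return register

@target("python")
class PythonGen:
    def emit(self, query):
        return f"coll.find({query!r})"

@target("javascript")
class JsGen:
    def emit(self, query):
        pairs = ", ".join(f"{k}: {v!r}" for k, v in query.items())
        return f"db.coll.find({{{pairs}}})"

def transpile(query, lang):
    """Core stays language-agnostic: it only looks up the plugin."""
    return GENERATORS[lang].emit(query)

q = {"status": "A"}
print(transpile(q, "python"))       # coll.find({'status': 'A'})
print(transpile(q, "javascript"))   # db.coll.find({status: 'A'})
```

A real transpiler parses the input language into a shared intermediate tree first (Compass uses ANTLR for this); the per-language generators then walk that tree.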

Who should attend?

Anyone with an interest in open source, compilers, parsers, Compass, MongoDB, ANTLR, or a general passion for complex technical problems is welcome. I will talk about classic compiler design without requiring attendees to be programming-language experts, although any knowledge of compiler implementations will be useful. Attendees will leave the talk knowing exactly how to extend Compass to support new languages and will hopefully be inspired to go out and add their favorite language to Compass!

Why should they attend?

This talk addresses both MongoDB-specific challenges as well as highly technical computer science problems. Attendees will learn not only about Compass itself, but about compiler design. As developers we use compilers every day, but it is not so often that we get the opportunity to actually write them, and compilers are awesome!
 recording release: yes license: CC BY  

93. The life of open source spatial with QGIS - From hobby to grown up, with bonus growing pains
Nathan Woodrow

QGIS started back in 2002 as a simple hobby project with only a handful of developers and a small user base. Since then, it has grown into one of the most popular cross-platform open source spatial desktop tools available, with an ever-growing developer and user base, widely used in many sectors, even as a full replacement for commercial offerings.  This growth has not come without cost or growing pains to the project and community.  A growing user base and an ongoing effort to be taken as a serious alternative to commercial offerings have led to a shift in developer and user expectations for the project.  

As QGIS has grown into areas and user bases the early developers never dreamed of, some of the feelings the project had have changed, and this might have zapped some of the fun.  Was this inevitable, as we pushed the project with more and more features and promotion in the spatial community?  How do you maintain the same feel for the project, while at the same time becoming more serious?

What about the users and the community?  Have their expectations now changed for the project? At what point did we notice a change in the community and the levels of service we were required to live up to?

Long-term-release, better build process, better documentation, UI translations into many languages; these all make great software, but at a cost for a mostly volunteer-run project. When does the project switch from pure volunteer to a more commercial entity?
 recording release: yes license: CC BY  

94. "Write a single library to handle all input devices, it'll be easy" they said...
Peter Hutterer

Six or so years ago, input devices in userspace were handled by a set of different modules, all with their own properties and behaviours. Where a device didn't work as expected, it was largely up to the users to find the right forum with examples that actually work. This had worked "well" for about a decade or two.

Then, largely driven by the promise of the differently-shaded pastures of Wayland, a new library was born: libinput. The prime motivation behind this library was to have a unified input stack that works well out of the box for any device, regardless of the display server. libinput is now the input backend for all major Wayland compositors and the default X.org input driver.

This talk goes through the motivations behind libinput and its design choices. Why and how is it different to what we had before? Why can we handle mice, touchpads, tablets and touchscreens, but not joysticks? Or the weird but common question: why are there no configuration options? (Spoiler alert: there are quite a few.) The talk will explain how some of these devices work, how we handle them, and why certain behaviours are required and/or at least need to be worked around. I will explain the various current and future features and our plans to improve them. And where we went wrong. Because if it wasn't for the error part of "trial and error", everyone would think that we know what we're doing.

This talk is about technical details, but intended to be accessible to everyone. You won't need to know programming to understand it, but you'll probably leave knowing more about devices than you want to know.
 recording release: yes license: CC BY  

95. Betrusted: Better Security Through Physical Partitioning
bunnie, Sean "xobs" Cross, Tom Marble

The condensation of virtually everything into a single device -- the smartphone -- has normalized deviant behaviors that create security risks. For example, many smartphone users conduct secure transactions while juggling several other apps, thus creating opportunities for adversaries to exploit human error. Furthermore, running both secure and insecure code on a common CPU increases the risk of exposing user secrets thanks to microarchitectural side channels --  a large, complex, and opaque attack surface.

System architects have introduced "secure enclaves" as a technique to minimize the attack surface between sensitive secrets and an untrusted CPU. In theory, secret key material never leaves the perimeter of the enclave – keys are generated and stored permanently within the enclave. Regardless of the implementation details, secure enclaves inevitably rely on an untrusted CPU to relay messages to the user. This is because there is typically just one screen and keyboard presented to the user, and these elements are directly connected to the untrusted CPU. Thus, secure enclaves can only protect keys from being compromised; they cannot protect the data itself from compromise. 

This talk introduces Betrusted, a device designed to partition a set of secure applications into a physically separate device that is designed using security-first principles: the hardware is simple, open source, and is user-verifiable from the keyboard to the LCD. Putting secure apps on a separate screen also helps users focus on their secure transactions, while minimizing attack surfaces and eliminating microarchitectural sidechannels. The Betrusted project’s scope will eventually range from secure silicon all the way to application layer code, and we are looking for developers of all stripes who are interested in contributing to the project.
 recording release: yes license: CC BY  

96. Senseless - environmental sensing without additional hardware
Kim Burgess

Your workplace probably watches you. Every minute. Every day. Let’s see how.

Modern environments are equipped with many, often invisible, ways of sensing the people within them, their actions, or their potential interaction intent. Importantly, many of these techniques are possible without additional hardware, explicit interaction, or notable changes to how you may already interface with a physical space. In isolation, many of these inputs do not provide significant information; in aggregate, however, they can create a detailed context for an environment and the people within it.

This session is a brief tour of common sensing techniques and highlights some information we all create as inhabitants of these spaces.
 recording release: yes license: CC BY  

97. Tensorflow on open source GPUs
David Airlie

One of the biggest uses for GPU compute is AI/Machine Learning applications. The tensorflow library from Google is one of the most used frameworks in the AI/ML area. To deploy tensorflow on GPUs currently, the closed source nvidia stack is required using CUDA. This talk will explore the work done and left to do to enable a tensorflow deployment on open source Mesa drivers. The use of SYCL via LLVM and OpenCL along with work towards enabling OpenCL on a broader range of hardware will be discussed.
 recording release: yes license: CC BY  

98. Privacy and Decentralisation with Multicast
Brett Sheffield

Written in 2001, RFC 3170 states: "IP Multicast will play a prominent role on the Internet in the coming years.  It is a requirement, not an option, if the Internet is going to scale.  Multicast allows application developers to add more functionality without significantly impacting the network."

Nearly two decades later, multicast is still largely ignored and misunderstood.

There are many common misconceptions about multicast, including that it is only useful for streaming video and audio.  It does so much more than that.

Multicast is really about group communication.  It is, by definition, the most efficient way to distribute data to groups of nodes.

This talk explains why multicast is the missing piece in the decentralisation puzzle, and how it can help the Internet continue to scale, better protect our privacy, solve IoT problems and make polar bears happier at the same time.

Multicast brings with it a very different way of thinking about distributed systems, and about what is possible on the Internet: from database replication to chatops, server federation, configuration management and monitoring.

Even applications such as chat, which are fundamentally multicast in nature, are being built on top of unicast protocols.  There is a Better Way.

Multicast lets us do things that would be impossible with unicast.  Imagine sending software updates to a billion IoT nodes simultaneously, using just one tiny virtual server.

At a time when even the web is moving to UDP with HTTP/3 and WebRTC, it is time we took a serious look at what we're missing by not using multicast at the network layer to underpin our Internet protocols.

We'll discuss how you can start using multicast in your project today, and how multicast design and thinking differs from unicast.  We'll cover some of the different types of IP multicast, the basics of multicast routing and how to build in TCP-like reliability.
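The core ideas above — joining a group, and one send reaching every member — can be sketched with plain UDP sockets. This is a minimal illustration, not anything from the talk itself; the group address 239.255.0.1 is an arbitrary administratively scoped choice, and pinning both sockets to the loopback interface is an assumption so the example works on a single Linux host without multicast routing.

```python
import socket
import struct

GROUP, PORT = "239.255.0.1", 5007  # arbitrary administratively scoped group (assumption)

# Receiver: join the group. Any number of receivers could join the same group;
# each would get its own copy of every datagram sent to it.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
recv.settimeout(5)

# Sender: a single sendto() reaches every group member -- no per-receiver copies.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
send.sendto(b"hello, group", (GROUP, PORT))

data, _ = recv.recvfrom(1024)
print(data.decode())
```

Note that this is plain UDP: delivery is best-effort, which is why the reliability layers mentioned above matter for anything beyond a demo.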

We'll explore what can be done with multicast now, and look forward to how improvements in multicast can make a better Internet for the future.
 recording release: yes license: CC BY  

99. New Phone, Who Dis?: Human Authentication in the Digital Age
Yaakov

Proving somebody’s identity is usually an important matter, and critical to get right. With digital licences springing up around the globe, including here in Australia, how can we be sure that the computer is telling the truth? Does digitising the process actually improve it?

In 2015, the NSW Government announced a commitment to providing digital licences so that people can identify themselves using their smartphones. After a limited trial in 2018 and 2019, the system is supposed to go live to users across the state some time soon. Other countries are trialling similar systems, and South Australia already has one.

The way this is presented is largely as a black box, where ˚✧₊⁎ magic happens ⁎⁺˳✧༚ and your identity is somehow proven. For many people, particularly tech-savvy folk, magic is neither a sufficient explanation nor a basis for trust.

Using the NSW digital licence system and associated app, this talk will:

- have a look at authentication, authorisation and identity in the physical realm
- investigate differences between real-world identity and digital identities
- explore the inner workings of the New South Wales digital driver’s licence system, based upon reverse-engineering
- discuss why you should - or shouldn’t - trust digital licensing systems, and how they impact identity verification in your own life
 recording release: yes license: CC BY  

100. How to make kernel and user space CI for input devices?
Benjamin Tissoires

Making sure that a commit in the kernel doesn't break a mouse, a touchpad or a space bar is hard. Ideally, we would need to run this commit, and all versions of it, against every possible device. Rinse, wash, repeat for the user-space commits, because there is this one guy who uses the CPU overheating when long-pressing the space bar as an indication to emacs to send a control key event (xkcd 1172).

But the universe hasn't provided us with an army of people to test the devices, an infinite amount of resources, or a lot of time to spare. So doing true CI on actual physical devices is hard. That's a pity, really. Luckily, we have computers, and that's a start.

In this talk we will show how we moved from basically no regression tests 10 years ago to a state where every commit now gets tested against a good range of devices. We will show how we do CI on the kernel side, and how we do CI on the user-space side.
 recording release: yes license: CC BY  



Location
--------
Room 6


About the group
---------------
linux.conf.au is a conference about the Linux operating system, and all aspects of the thriving ecosystem of Free and Open Source Software that has grown up around it. Run since 1999, in a different Australian or New Zealand city each year, by a team of local volunteers, LCA invites more than 500 people to learn from the people who shape the future of Open Source. For more information on the conference see https://linux.conf.au/