pre-release: PyOhio meeting announcement

Please take a moment to review your details and reply with OK or edits.
The subject and everything below it is what will go out, and it will also be used to title the videos.

Subject: 
ANN: PyOhio at Cartoon 1 Sat July 27, 9a


PyOhio
=========================
When: 9 AM Saturday July 27, 2019
Where: Cartoon 1

https://www.pyohio.org/2019/schedule/

Topics
------
1. Saturday Welcome
Dave Forgac

 

Welcome to PyOhio! Important information and a brief overview of the conference.
 recording release: yes license: youtube  

2. Changing Lives through Open Source, Passion and Mentoring
Kattni Rembor

In trying to learn Python, I stumbled into a passion I had never considered. My path began with learning Python on hardware. Through mentorship and the help of friends, I began to flourish. Since then I have continued to contribute in ways I never thought possible, between code, community, and becoming a mentor myself. This is the story of my journey and how mentorship can change lives.
 recording release: yes license: youtube  

3. “Who’d I Lend That Book To?” Hard Questions Answered with Python
Daniel Lindeman

The Internet of Things (IoT) is here to stay, but getting started can seem intimidating. Inspired by the magical checkout process at my local library, I began the journey of building my own IoT book lending application. Along the way, I discovered that Python was consistently able to make the impossible seem approachable!

We’ll start with a base installation of Raspbian on a Raspberry Pi and find that Python is already there waiting to help. Then we’ll hook up an RFID reader to our Pi, and see that even though it has wires and pins, it’s nothing to be scared of. With the RFID reader and the fantastic MFRC522-python library, we’re able to read and write data to RFID stickers, neat!
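
A minimal sketch of that read/write step, assuming the pip-installable `mfrc522` package (a common packaging of MFRC522-python) and a reader on the Pi's default SPI pins; details in the talk may differ:

```python
# Sketch only: assumes the pip "mfrc522" package and a reader wired to the Pi's SPI pins.
from mfrc522 import SimpleMFRC522
import RPi.GPIO as GPIO

reader = SimpleMFRC522()
try:
    # Tag a book by writing an identifier (hypothetical ISBN) to its RFID sticker
    reader.write("9780132350884")
    # At checkout time, read the sticker back along with the tag's UID
    tag_id, text = reader.read()
    print(f"tag {tag_id}: book {text.strip()}")
finally:
    GPIO.cleanup()  # release the Pi's GPIO pins when done
```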

We could stop here, but we’ve got Python, so let’s hook it up to a Flask app and end at a complete book lending application. I hope attendees will gain an appreciation for the technology all around them, their local library, and how powerful Python is. I hope to demystify working with hardware and dispel some perceived barriers to entry for IoT applications.

I love reading books, but I love lending them out even more! In order to keep track of my personal library, a daunting task indeed, I’ve employed Python, a Raspberry Pi, and an RFID reader. Take a tour through what it’s like working with hardware, Python, and putting it all together into a useful web application. This is a beginner-friendly talk, so don’t worry if you’ve never worked with hardware!
 recording release: yes license: youtube  

4. Becoming a Better Curator of Your Code
Ian Zelikman

We will start the talk with an introduction to the role of a curator and how it applies to software engineers.

In this talk we will discuss some principles and techniques that enable us to produce better code by applying them with the curation mindset of maintaining code quality beyond producing the next bug-free feature.

Some of the topics we will cover:

* Promote the use of conventions and style guides for your team 
* Code readability matters
* The first goal of writing tests should be to document the code functionality
* Encourage discussion on feature implementation
* Encourage discussion during code reviews
* Boy Scout rule - leave the code in better shape than you found it
* Embrace new and mature technologies, and try to incorporate them into your code when applicable

At the end of the talk we will review the software curation mindset and how you can bring it to your organization.

Writing code that functions correctly is only part of the development process. The majority of our time is spent reading, maintaining and refactoring our code.

In this talk we will discuss how seeing our work as code curation actually makes our job much easier and more productive.
 recording release: yes license: youtube  

5. Big Data with Small Computers: Building a Hadoop Cluster with Raspberry Pis
Alexandria Kalika

1. Details of the hardware set up - Raspberry Pis and network set up to create a functional cluster. 
2. Installing Hadoop, YARN, HDFS, Spark, and other software.
3. Different data sets and how to use your newly built cluster to analyze data. 
4. Using powerful Spark technologies to quickly analyze datasets. 
5. Overview of open source technologies in creating a personal, powerful data cluster.

The Hadoop ecosystem created a wide array of amazing tools and technologies that made processing of large amounts of data easier and more fun. In this talk I will go through how to use Raspberry Pi 2s to create a distributed cluster worthy of interesting data analysis. I will use Apache Spark and other open source, easy to obtain software and hardware for data insights.
 recording release: yes license: youtube  

6. Demystifying Machine Learning
Nikola Novakovic

We’ll explain basic concepts like linear algebra and loss functions, figure out when to use machine learning and build an ML model that we’ll be able to use in real world apps. Here’s an in-depth list of what we’ll cover:

* What Machine Learning is and where it’s being used
* How to recognize when machine learning is necessary
* Math & Statistics 101
* Algorithm 1: Linear Regression
* Live Coding Session: Salary Estimator (a minimal sketch follows this list)
* Q & A
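
As a taste of the live-coding portion, here is a minimal linear-regression "salary estimator"; scikit-learn and the toy numbers are assumptions, not material from the talk:

```python
# Illustrative only: toy data and scikit-learn's LinearRegression.
import numpy as np
from sklearn.linear_model import LinearRegression

years_experience = np.array([[1], [3], [5], [7], [10]])        # feature matrix
salary = np.array([45_000, 60_000, 72_000, 85_000, 105_000])   # targets

model = LinearRegression().fit(years_experience, salary)
print(model.coef_, model.intercept_)   # learned slope and intercept
print(model.predict([[6]]))            # estimated salary at 6 years' experience
```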

Machine Learning is something you'll see referenced very frequently now in everything from marketing materials to sales pitches and job postings. With so much hype it can be hard to tell what people mean when they say Machine Learning. In this talk we will demystify Machine Learning by understanding its core concepts and applying that knowledge to real-world examples.
 recording release: yes license: youtube  

7. Enough Python to Fake It
Catherine Devlin

You came to a conference for a programming language you don't know.
Good for you!  Your courage and curiosity will pay off.  Let's teach
you enough Python to get you started, and enough Python concepts to
help you understand the PyOhio goodness you're about to witness.

We'll devote about an hour to hands-on learning of the basics of
writing Python programs, and an hour to understanding more advanced
ideas and terminology in general terms.  Prepare for creative analogies
and physical demonstrations that may or may not involve interpretive
dance.

Suitable for non-programmers as well as programmers who don't know
Python.  We'll tune the pace on the fly to the needs of those who
actually attend, but if there's a mix, the least experienced will
get highest priority.

Bring a laptop!

A hands-on introduction to the basics of Python programming, plus a high-level,
conceptual, hand-wavey introduction to the intermediate and advanced Python
topics you'll be hearing about all weekend.  You won't become a full-fledged
Python programmer, but you'll learn enough to get through PyOhio productively,
and you'll know how to continue your own Python education afterward.
 recording release: yes license: youtube  

8. Adopt-a-pytest
Dane Hillard

## Who

This is for anyone currently using `unittest` for Python unit testing who would like to adopt `pytest`.

## Takeaways

* How to run `pytest`
* How to create a basic `pytest` configuration
* Using `pytest` marks to shim an existing project
* Converting a `unittest` test to `pytest`

## What

With its simplified syntax, powerful fixture behaviors, detailed test reports, and plugin-based architecture, `pytest` has a lot to offer. Whether you're new to Python unit testing or you've been using `unittest` for a while, `pytest` may be something to consider. It's not too hard to get up and running with `pytest` on a fresh project, but how can you retrofit an existing project without having to refactor the world all at once?
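
In the simplest case, the conversion step looks roughly like this (a hypothetical `slugify` function stands in for real code under test):

```python
def slugify(text):
    """Hypothetical function under test."""
    return text.replace(" ", "-")

# Before: a unittest-style test. pytest can still collect and run this as-is,
# which is what makes a gradual migration possible.
import unittest

class TestSlug(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("py ohio"), "py-ohio")

# After: the same check as a plain pytest function with a bare assert.
def test_spaces_become_dashes_pytest():
    assert slugify("py ohio") == "py-ohio"
```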

pytest is a testing framework that makes writing and running Python tests simpler. Adopting new tooling in a large system is often a burden. How can you introduce pytest gradually with minimal pain?
 recording release: yes license: youtube  

9. Feature Engineering: An Apprentice’s Guide to the “Dark Art” of Machine Learning
Deborah Diller Harris

What is feature engineering and why do we need it?  When is it applied? Is it an art or a science? Find out the answers to these questions and more as we explore different methods of feature engineering with practical examples provided. There are three main methods of feature engineering: adjusting raw features, combining raw features and decomposing raw features into usable subsets. We will use datasets to illustrate binning, encoding, binaries, summing, differencing, feature scaling, extraction, and the manipulation of date/time features.  Finally, we will explore the performance of a machine learning model before and after feature engineering is applied. As a postscript, current automated feature engineering tools for Python will be introduced.
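
For a flavor of what those transformations look like in code, here is a small pandas sketch (made-up data, not the talk's datasets) covering binning, one-hot encoding, and a date/time feature:

```python
import pandas as pd

df = pd.DataFrame({
    "age": [23, 35, 58, 71],
    "city": ["Columbus", "Dayton", "Columbus", "Akron"],
    "signup": pd.to_datetime(["2019-01-05", "2019-03-17", "2019-06-02", "2019-07-27"]),
})

df["age_bin"] = pd.cut(df["age"], bins=[0, 30, 60, 120],
                       labels=["young", "middle", "senior"])   # binning
df = pd.get_dummies(df, columns=["city"])                      # one-hot encoding
df["signup_dayofweek"] = df["signup"].dt.dayofweek             # date/time feature
print(df.head())
```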

Why is feature engineering considered the "dark art" of machine learning? Transforming raw data into a form that your machine learning algorithm can utilize seems mysterious and downright frightening! Bring your wizard hat and join me as this machine learning apprentice shares her personal book of feature engineering incantations.
 recording release: yes license: youtube  

10. Lessons from Zero-Defect Software
Jason R. Coombs

You know that feeling when you look at a piece of code you or someone has written and it has a smell, it's inelegant, or its incomprehensibly complex. And then there's the other feeling, when you see a piece of code that's comprehensible, elegant, and it is ready to adopt the behavior you seek. It's this feeling we want to replicate and enhance. Instigated by a simple tweet, the speaker reaches back in time to explore the foundational practices that lead to our best code.

Starting with Refactoring, we'll reflect on the techniques of change that retain stability while increasing sophistication or reducing complexity. We'll explore how code is a form of conversation and ways that conversation can transpire in a code repository.

Next we will explore how Python has supported the principles and primitives of functional programming from early versions and how the constraints of functional programming lead to robust logic. We'll examine the functional nature of comprehensions and the powerful feature of functions as parameters.

In the main event, the speaker will draw on his early experiences with Zero-Defect Software, where one writes software with literally no bugs, and how these techniques can influence the design and implementation toward a more robust solution, starting with a rigorous but impractical ideal and distilling from that a pragmatic approach that retains much of the benefit of the technique. Integrating the lessons from refactoring and functional programming, a coding approach emerges that promises to enable and empower your development.

Writing software with no defects is extremely difficult and expensive, but the lessons learned from such ambitious projects can inform our approach for a more practical development technique. This talk looks at how principles from zero-defect engineering, functional programming, and refactoring come together to produce robust, readable, and reliable code.
 recording release: yes license: youtube  

11. Distributed Deep Neural Network Training using MPI on Python
Arpan Jain, Kawthar Shafie Khorassani

Deep learning models are a subset of machine learning models and algorithms designed to induce Artificial Intelligence in computers. The rise of deep learning can be attributed to the presence of large datasets and growing computational power. Deep learning models are used in face recognition, speech recognition, and many other applications. TensorFlow is a popular deep learning framework for Python used to implement and train Deep Neural Networks (DNNs). Message Passing Interface (MPI) is a programming paradigm, often used in parallel applications, that allows processes to communicate with each other. Horovod provides an interface in Python to couple DNNs written using TensorFlow with MPI, training them in less time using a distributed training approach. MPI functions are optimized to provide multiple communication routines, including point-to-point and collective communication. Point-to-point communication involves a sender process and a receiver process, while collective communication involves a group of processes exchanging messages. In particular, reduction is a collective operation widely used in deep learning models to perform group operations. In this talk, we intend to demonstrate the challenges and elements to consider for DNN training using MPI in Python.
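
As a toy illustration of the collective "reduction" pattern described above, here is an allreduce of per-worker gradients using mpi4py (an assumption on our part; the talk itself centers on Horovod with TensorFlow):

```python
# Run with, e.g.: mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_grad = np.full(3, float(rank))            # stand-in for this worker's gradients
summed = np.empty_like(local_grad)
comm.Allreduce(local_grad, summed, op=MPI.SUM)  # collective: every rank gets the sum
averaged = summed / size                        # averaged gradients, as in data parallelism
print(f"rank {rank}: {averaged}")
```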

Deep Learning (DL) has attracted a lot of attention in recent years, and Python has been the front-runner language when it comes to frameworks and implementations. Training DL models remains a challenge, as it requires a huge amount of time and computational resources. We will discuss distributed training of Deep Neural Networks using MPI across multiple GPUs or CPUs.
 recording release: yes license: youtube  

12. How to Write Pytest Plugins
Darlene Wong

Pytest is a widely-used, full-featured Python testing tool that helps you write better programs.  Whether you have been using Pytest for years or are just getting started, you may find features of Pytest that you would like to modify or customize for your own environment or specific use cases.  Did you know that you can easily enhance and customize Pytest through the use of plugins?  In this talk, you will learn all about some of the useful Pytest plugins that are available, and learn how to create your own plugins.  We will walk through the plugin creation process by creating a plugin to upload Pytest reports to a Google Cloud Storage bucket.
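
For orientation, the smallest useful shape of a plugin is a couple of hooks in a `conftest.py`; the option name and fake "upload" below are placeholders, not the speaker's Google Cloud Storage plugin:

```python
# conftest.py -- a minimal plugin: one command-line option plus a reporting hook.
def pytest_addoption(parser):
    parser.addoption("--publish-report", action="store_true",
                     help="pretend to upload the test report somewhere")

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    if config.getoption("--publish-report"):
        passed = len(terminalreporter.stats.get("passed", []))
        failed = len(terminalreporter.stats.get("failed", []))
        terminalreporter.write_line(
            f"[demo plugin] would upload report: {passed} passed, {failed} failed")
```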

Pytest is a widely-used, full-featured Python testing tool that helps you write better programs.  Did you know that you can easily enhance and customize Pytest through the use of plugins?  In this talk, you will learn all about some of the useful Pytest plugins that are available, and learn how to create your own plugins.
 recording release: yes license: youtube  

13. A Brief History of Fire Brigades
Jon Banafato

The history of fire companies dates back millennia, but their current form is relatively new, just a few hundred years old. The evolution of these companies happened in parallel in different nations, but I’d like to tell the story of how London’s fire brigades became the public service we know today. We'll look at how fire departments have evolved starting with the Roman Empire all the way through the formation of the first publicly funded fire brigades in London. By the end, I hope to convince you that we need an Internet emergency service and that we should take a shortcut to get there.

Publicly funded fire departments are critical to our society. We rely on them for fire prevention and fighting, and their influence has shaped our cities for centuries. It's time the software industry learned from history and created a public service of our own.
 recording release: yes license: youtube  

14. Using Dash to Create Interactive Web Apps for Non-Technical Audiences
Joseph Willi

Analytical web applications can serve as a powerful means for scientists and engineers to interact with data and identify trends in a concise and straightforward manner. Such tools can allow users to immediately see the effects of modifying specific input parameters. Additionally, interactive web apps can be utilized to present data visualizations and analysis results in engaging ways.

Unless you're a full-stack developer, creating these types of web applications may seem quite challenging. Dash, a Python framework written on top of Flask, Plotly.js, and React.js, handles many of the complexities associated with building custom interfaces and provides users the ability to build powerful data visualizations strictly through Python.
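
A minimal Dash app in that spirit: a slider wired to a graph through a callback, all in Python. Package names reflect the 2019-era Dash layout, and the data is made up:

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Slider(id="n", min=1, max=10, step=1, value=3),
    dcc.Graph(id="squares"),
])

@app.callback(Output("squares", "figure"), [Input("n", "value")])
def update_figure(n):
    # Recompute the plot whenever the slider moves
    xs = list(range(n + 1))
    return {"data": [{"x": xs, "y": [x * x for x in xs], "type": "line"}],
            "layout": {"title": f"x^2 up to x = {n}"}}

if __name__ == "__main__":
    app.run_server(debug=True)
```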

Despite being an intermediate Python user lacking full knowledge of the technologies and protocols required to build web-based applications, I was able to create a UI using Dash. More specifically, I built an interactive dashboard for firefighters to process and interact with sensor data collected during performance testing of their rescue equipment.

During this talk, I will briefly detail the motivation behind this project. Then, I'll describe how the project progressed to its current state, while highlighting key points that can be applied to the general case of developing interactive web apps for audiences from non-technical backgrounds. To conclude my presentation, I will show a demo of the interactive web app and summarize the key takeaways.

Have you ever struggled with finding ways to present data visualizations and/or results to non-technical audiences in a coherent and engaging manner? In this talk, I'll detail how I overcame such a challenge by using Dash to build an interactive app for firefighters to use during performance testing of their rescue equipment.
 recording release: yes license: youtube  

15. Explicit is Better than Implicit: Making Culture Visible with Team Charters
Christopher T. Miller

*Beautiful is better than ugly.* There is more to a team than just throwing people together and telling them to ship code. The culture of the team matters. Not company culture, but the culture and operating rules of the group of people who spend their days together. That culture can be amazing when it is mindfully considered. So how does that happen?

*Explicit is better than implicit.* A team charter is documentation written by a group of people to capture their purpose, their values, their working rules, and their general processes.  Making the invisible visible is the purpose of the team charter.

*There should be one-- and preferably only one --obvious way to do it.* Your team charter will provide an anchor to keep your team true to their ideals, even and **especially** during periods of great stress by providing a written record of how they aspire to work together.

*In the face of ambiguity, refuse the temptation to guess.* Whether it is establishing priorities or onboarding new team members, your team charter will take the guesswork out of the non-code parts of working together. 

In this talk, we'll explore what a team charter is, how to create one, and view examples of charters teams have created and used in their day-to-day work.

If you’ve ever joined a new team, you know that there are hidden rules for how the team operates: what they value in their day-to-day work, what is important to them.

Breaking news: teams are hard. We document our code... shouldn't we document our team's values and ideals?
 recording release: yes license: youtube  

16. A Gentle Introduction to Linear Programming in Python
Bethany Poulin

## What is Linear Programming
Linear programming is an optimization method with broad utility. Sadly, the study of linear programming is often overlooked in favor of sexier machine learning algorithms by both practitioners and data science educators (bootcamps and graduate programs alike). However, it is used extensively in many of the fields from which data science evolved, and there are useful libraries in most of the major analytical languages. We will spend 5-10 minutes understanding linear programming in general before moving on to code.

## Linear Programming in Python
In Python, the most common library is PuLP, which we will use to look at two separate optimization problems: one to minimize and one to maximize a desired outcome variable.
We will go through:
* System requirements
* Package installation
* Computational complexity (floating-point values & discrete number calculations)


## Real World Examples with Code
We will be setting up and completing the two problems. In doing so we will learn how to simplify a complex system into a series of linear equations.

We will learn about:
* Decision Variables - the outcomes we choose to optimize
* Objective Function - the series of relationships between decision variables which affect the optimization
* Constraints - the limits on our decision variables which allocate resources
* How to visualize the problem (it could be a graph or simple illustration)
* How to set up the equations defining the problem
* How to solve the series of equations
* How to interpret the results
* How to know when you can, should, or should not use linear programming to solve a problem

The presentation will assume a basic understanding of Python, algebra, and linear equations (y = mx + b), but it is not going to be presented in a mathematically demanding way. We will look at real-world resource allocation problems and use simple Python code to solve them.
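
As a flavor of the code, here is a tiny maximization problem in PuLP (the talk's own examples and data will differ):

```python
from pulp import LpMaximize, LpProblem, LpVariable, value

# Decision variables: how many of each product to make
chairs = LpVariable("chairs", lowBound=0, cat="Integer")
tables = LpVariable("tables", lowBound=0, cat="Integer")

prob = LpProblem("workshop_profit", LpMaximize)
prob += 30 * chairs + 70 * tables      # objective function: total profit
prob += 2 * chairs + 5 * tables <= 40  # constraint: hours of labor available
prob += 1 * chairs + 3 * tables <= 20  # constraint: boards of lumber available
prob.solve()

print(value(chairs), value(tables), value(prob.objective))
```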

### Resources
There will be a fully fleshed-out GitHub repository with:
 * The slides
 * The code
 * Brief instructional README.md with links to other useful online sources

Linear programming is a useful computational technique for finding minima or maxima of a complex system by breaking it into a series of linear equations that describe the system. It has many practical applications in industry, medicine, agriculture, and retail environments. Together we will explore simple linear programming problems using Python and the PuLP module.
 recording release: yes license: youtube  

17. Python Improvements (or This Is Not Your Teacher's Python)
Travis Risner

Topics that will be covered include:

- f-strings
- typing
- tuples
- the secrets library
- nanosecond timing
- hashing with SHA-3 and other techniques
- dataclasses
- pathlib
- underscores in numbers

We will discuss not only how to use the new features but why.
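
A few of the listed features in one small example (Python 3.7+; illustrative only):

```python
import secrets
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Attendee:
    name: str
    tickets: int = 1

budget = 1_000_000                                   # underscores in numbers
token = secrets.token_hex(16)                        # secrets library
config = Path.home() / ".pyohio" / "settings.toml"   # pathlib path building

a = Attendee("Ada", tickets=2)                       # dataclasses
print(f"{a.name} holds {a.tickets} tickets; budget={budget:,}")  # f-strings
print(config, token)
```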

This session covers improvements to the Python language in versions 3.6 and 3.7.  We will discuss aspects such as f-strings, formal typing, the various kinds of tuples, more precise timing, better hashing, and more.
 recording release: yes license: youtube  

18. Scraping Your Way to a Dataset
Alex Zharichenko

It is essential to have a very large and high-quality dataset in order to perform significant analytics or to use in various machine learning tasks. For some tasks, simple APIs or repositories of data exist to collect from. But for many other tasks, like tracking prices of products, predicting stock prices, and predicting outcomes of sports games, there isn't a convenient way to retrieve this information besides a webpage. Because of these circumstances, learning to scrape data from webpages and other sources allows us to create our own dataset. Additionally, scraping grants us the ability to ask better questions about data in the world.

This talk is geared towards beginner-to-intermediate Python developers who want to be able to ask and answer better questions through data. This talk will provide a guide for web scraping through two examples, and it will explain how to get the scraped data into a usable form. Throughout the talk, I will highlight some tips for improving scraper performance, minimizing the risk that a web server will stop you, and different ways to store the collected data. The first of the two examples will examine a simple case of scraping data about the lottery, and the second will explore a more challenging case of scraping course information from a university.
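
The general shape of a polite scraper looks something like this; requests and BeautifulSoup are assumptions (the abstract doesn't name its tools), and example.com stands in for a real site:

```python
import time
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "pyohio-scraping-demo/0.1"}  # identify your scraper
rows = []
for page in range(1, 3):
    resp = requests.get(f"https://example.com/results?page={page}",
                        headers=headers, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for cell in soup.select("td.number"):             # hypothetical markup
        rows.append(cell.get_text(strip=True))
    time.sleep(1)  # throttle requests so the server doesn't block you

print(rows)
```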

Large datasets are vital for the majority of analytic and machine learning tasks. But what happens when the data you need isn't available in some convenient and easily obtainable form? This talk will go through the process of data scraping to create a dataset that can be then used for various analytical or machine learning tasks.
 recording release: yes license: youtube  

19. A Hands-On Guide to Building Interactive Command-Line Apps with cmd2
Todd Leonhardt, Kevin Van Brunt

Interactive command-line applications (CLIs) are used extensively in many real-world scenarios, particularly in the DevOps and Security communities as well as for internal developer tooling and automation.  I'm sure many of you have used the wonderful [ipython](https://ipython.org) interactive Python shell which is a good example of a CLI.  Python has the built-in [cmd](https://docs.python.org/3/library/cmd.html) library for creating CLIs, but it is extremely bare-bones.  The [cmd2](https://github.com/python-cmd2/cmd2) package is a batteries-included extension of `cmd` which makes it much quicker and easier to create feature-rich and user-friendly CLIs.

The presentation will first explain how to install `cmd2`.  The talk will next show how to create a basic `cmd2` application.  Then the talk will progressively add features to this application while demonstrating the capabilities built into `cmd2`.  In the end, the presentation will show how to build a basic but feature-rich and user-friendly CLI application from scratch. This application will include many features which make it easy to use for customers, including:

* Built-in help
* Top-notch tab-completion
* Shell-like functionality including ability to run shell commands, pipe to shell commands, and redirect output to files
* Built-in application scripting
* Built-in Python scripting
* Built-in history
* Command aliases and macros

Ultimately, people who attend this talk will learn how to use the Python programming language with the `cmd2` package to quickly and efficiently build their own interactive command-line applications.
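
For a sense of scale, this is about all it takes to get a working cmd2 application, with help, history, and tab-completion included (the commands here are made up):

```python
import cmd2

class BookShell(cmd2.Cmd):
    """Tiny interactive shell: every do_* method becomes a command."""
    prompt = "books> "

    def do_lend(self, args):
        """lend <title> -- record that a book has been lent out."""
        self.poutput(f"Lent out: {args}")

    def do_due(self, args):
        """Show a (hard-coded) due date."""
        self.poutput("Everything is due in two weeks.")

if __name__ == "__main__":
    BookShell().cmdloop()
```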

Interactive command-line (CLI) applications are extremely popular in the DevOps and Security communities as well as for internal tooling and automation.  Have you ever wanted to build an awesome CLI application using Python but don't know where to get started?   This talk will show you how to use the cmd2 package to quickly and easily build feature-rich and user-friendly CLI apps in Python.
 recording release: yes license: youtube  

20. Docker-Composing Your Way to a Better Development Environment
Ricardo Solano

By the end of this talk, audience members will understand the following concepts:

- Running application services/dependencies inside containers, and the advantages and disadvantages of doing so.
- Defining the application environment and services via Dockerfile and docker-compose.yml configuration files.
- Managing the environment using the `docker-compose` CLI.

To illustrate these concepts, a Django application will be configured to use a database, a cache, a queue, and a task worker.

Development environments can become cumbersome over time, with setup occasionally filling multiple pages of documentation and making onboarding new team members a difficult task. Whether you deploy your Python application using containers or not, Docker Compose is a great tool for defining development environments that closely mirror production and can be spun up with a single command.
 recording release: yes license: youtube  

21. Django in Production with PEX
Alexandru Barbur

This talk discusses deploying and running Django web applications in production using Twitter PEX. PEX can be used to package a Python application and its dependencies into a single file that can be easily copied to and run on other machines. The PEX format has some limitations, and this talk will explore one possible way to use it for distributing Django web applications.

- Introduction (2m)
- Overview of Twitter PEX (3m)
- Django Management Commands (5m)
  - Gunicorn Web Server
  - Celery Task Worker
- Entry Point Script (5m)
- Creating the Distribution (5m)
- Deploying the Distribution (5m)
- Running the Application (5m)
- Q&A

This talk discusses deploying and running Django web applications in production using Twitter PEX.
 recording release: yes license: youtube  

22. The Magic of Python
Darshan Markandaiah

In this talk, I will introduce and enumerate the magic methods available in Python. This is an introductory talk for anyone with basic familiarity with Python. For each class of magic methods that I introduce, I'll provide example code samples.

I will start off by introducing basic magic methods that allow you to do things like initializing objects and printing readable versions of objects. I will then go over select magic methods that allow for emulating numeric types. Next, I will cover methods that enable you to emulate sequences and write objects that can be indexed and iterated over. I will conclude by talking about context managers (which manage pre-step and post-step actions) and Abstract Base Classes in the abc module, which give you free functionality if you provide the implementation for certain magic methods on your classes.
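
A small class exercising several of those groups (construction, a readable repr, sequence emulation, and a context manager), purely for illustration:

```python
class Shelf:
    def __init__(self, *books):
        self._books = list(books)

    def __repr__(self):                     # readable version of the object
        return f"Shelf({', '.join(self._books)})"

    def __len__(self):                      # len(shelf)
        return len(self._books)

    def __getitem__(self, index):           # shelf[0], slicing, and iteration
        return self._books[index]

    def __enter__(self):                    # context manager: pre-step
        print("opening shelf")
        return self

    def __exit__(self, exc_type, exc, tb):  # context manager: post-step
        print("closing shelf")
        return False

with Shelf("Dune", "Emma") as shelf:
    print(shelf, len(shelf), shelf[0])
    for title in shelf:                     # iteration works via __getitem__
        print(title)
```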

Python has many built-in magic methods that are used internally by classes for certain actions. For example, adding two numbers calls the `__add__` method and iterating over a list calls the `__iter__` method. I will expand on this duck-typing principle and enumerate a range of magic methods that you can add to your classes to have a cleaner codebase.
 recording release: yes license: youtube  

23. Hands-On Web UI Testing
Andrew Knight

Unit tests are a great way to start Web app testing and automation, but the buck doesn’t stop there. Black-box feature tests that interact with the app like a user are just as important. They catch things unit tests cannot. The challenge is that Web UI tests are complicated and notoriously unreliable. So, how can we write tests well?

In this tutorial, we’ll cover:

* Using Python 3, pytest, and Selenium WebDriver to write tests like a pro
* Modeling Web UI interactions with page objects
* Deciding what should and should not be tested with automation
* Improving the solution to scale higher

The tutorial will include lecture segments intertwined with hands-on coding exercises. We will write tests together for the DuckDuckGo website.
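
A rough sketch of the page-object style the tutorial uses, assuming Selenium WebDriver with a local chromedriver; the DuckDuckGo selectors are simplified and may not match the real page exactly:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

class SearchPage:
    """Page object: wraps the interactions, keeps locators out of the tests."""
    URL = "https://duckduckgo.com"

    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get(self.URL)

    def search(self, phrase):
        box = self.driver.find_element(By.NAME, "q")
        box.send_keys(phrase)
        box.submit()

@pytest.fixture
def driver():
    drv = webdriver.Chrome()   # assumes chromedriver is on PATH
    yield drv
    drv.quit()

def test_search_shows_phrase_in_title(driver):
    page = SearchPage(driver)
    page.load()
    page.search("pyohio")
    assert "pyohio" in driver.title.lower()
```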

After this tutorial, you’ll be able to write battle-hardened Web UI tests for any Web app, including Django and Flask apps. You will also have a test automation project that can be the starting point for any Web UI tests.

Unit tests are great, but they don’t catch all bugs because they don’t test features like a user. Never fear! Let’s learn how to write robust, scalable Web UI tests using Python, pytest, and Selenium WebDriver that cover the full stack for any Web app.
 recording release: yes license: youtube  

24. If Statements are a Code Smell
Aly Sivji

Writing software is about making trade-offs between getting things done and doing them right. Time constraints often force us to take shortcuts to handle slight variations resulting in patches of conditional logic sprinkled throughout our codebase. Workarounds that once allowed us to move quickly now hinder our progress in getting new features out to customers. It doesn't have to be this way!

This talk will demonstrate how to use Object-Oriented programming patterns, specifically polymorphism, to handle conditional logic resulting in code that is easy to modify. The material will be presented in the context of a real-world code refactor for an open-source project. We will examine the initial solution, discuss its limitations, and walk through the process of refactoring nested `if` blocks into polymorphic classes.
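
The shape of that refactor, in miniature: a conditional dispatch on a type string becomes a lookup of small polymorphic classes (the payment domain here is made up, not the open-source project from the talk):

```python
class CardPayment:
    def process(self, amount):
        return f"charging card {amount}"

class InvoicePayment:
    def process(self, amount):
        return f"emailing invoice for {amount}"

# Each handler implements the same interface, so callers never branch on type.
HANDLERS = {"card": CardPayment(), "invoice": InvoicePayment()}

def process_payment(kind, amount):
    # Before: if kind == "card": ... elif kind == "invoice": ... else: ...
    try:
        return HANDLERS[kind].process(amount)
    except KeyError:
        raise ValueError(f"unknown payment type: {kind}")

print(process_payment("card", 25))
```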

The session is geared towards developers who do not have a lot of experience implementing Object-Oriented solutions. After this talk, you will be able to identify situations where Object-Oriented design can be used to simplify complex conditional logic. Using the steps outlined, you will be able to refactor code to improve software architecture without changing existing functionality.

`if` statements allow us to selectively execute code based on conditional logic. Overusing conditionals results in code that is hard to understand and difficult to modify. This talk will demonstrate how to refactor `if` statements into polymorphic classes, resulting in cleaner program design. After this session, you will be able to implement complex conditional logic using simple Python classes.
 recording release: yes license: youtube  

25. Building Docs like Code: Continuous Integration for Documentation
Mason Egger

It is common for developers to overlook the documentation of their work. They are either on a time crunch, lack the proper tooling, or simply forget to create and update the documentation. Whatever the cause, it is not a proper excuse for letting documentation go stale. Of all our development processes, few are as neglected as documentation. Documentation should be treated as being as important as the code that makes up the project. In this talk we'll take a look at current documentation processes and discuss moving the documentation into the code. With modern documentation tools such as MkDocs and Sphinx, both of which are Python-powered, and Continuous Integration tools, we can now include docs in the commit. They can be reviewed in code reviews, built and versioned in a CI tool, and even tested for things such as correct code examples and broken links. This is the process that developers know, understand, and enjoy. I introduced a team to this exact workflow and a working pipeline; all they had to do was keep the documentation up to date. This team currently has some of the most up-to-date documentation in a company of nearly two thousand engineers, and they never complain about writing or updating documentation. It’s just part of the workflow.

Project documentation is easy to neglect. Keep your docs inside your source repo and learn how to automatically build and publish beautiful docs on every commit. Viewers will leave with a new mindset on how to handle documentation, tooling for this process, and an easy-to-implement method to achieve this.
 recording release: yes license: youtube  

26. The Value of Docstrings
Eric Appelt

Python docstrings differ from regular comments in that they are stored
as an attribute of a callable object and are accessible through the
help() builtin function. However, their importance in writing
readable and maintainable Python modules has very little to do with their
technical language features in Python, and much more to do with the
discipline they bring to effective documentation.

Any new developer will encounter well-reasoned advice on the need to
comprehensively comment their code, and contrary but equally reasonable advice
to avoid using comments at all by writing better code. This talk will briefly
explore these viewpoints, and then review the standard conventions for Python
docstrings. I argue that the consistent and conventional use
of Python docstrings results in more readable and maintainable code than
that written with only unstructured comments, independent of how sparse or
plentiful those comments may be.
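
As a concrete, purely illustrative example of what a conventional docstring buys you: the same text appears in help(), in editor tooltips, and in generated documentation.

```python
import datetime

def lend_book(title, borrower, days=14):
    """Record that *title* has been lent to *borrower*.

    Args:
        title: Title of the book being lent.
        borrower: Name of the person taking it.
        days: Loan period in days (default 14).

    Returns:
        The date the book is due back, as a ``datetime.date``.
    """
    return datetime.date.today() + datetime.timedelta(days=days)

help(lend_book)  # prints the structured docstring above
```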

Additional areas of improvement in software design are discussed, including
effective encapsulation, the difficulty of naming methods, "docstring driven
development", extension into external documentation, and integration with testing.
Finally, I argue that Python
docstring conventions are a model for improved software design in general,
and are worth adopting even in other languages that do not necessarily
support actual docstrings.

Docstrings are a common convention in Python programming, but
their value may be taken for granted. In the absence of docstrings, schools of thought on writing effective code involve using many comments or using few if any comments. I will
argue that docstrings improve upon these approaches, and then explore how they can positively impact encapsulation, testing, documentation, and design.
 recording release: yes license: youtube  

27. The Blameless Post Mortem: How Embracing Failure Makes Us Better
Chris Wilcox

While developing software, bugs and mistakes are inevitable. Come to hear how we can improve the approaches we often take as software developers to work better with one another in heated moments of failure and the aftermath of incidents. Through better interactions we can build better teams and create better services.

In my career I have worked in both blameless and blame-full post-mortem environments, across a variety of projects ranging from individual Python libraries to core infrastructure for a cloud. I am excited to share how I think not assigning blame when things go wrong results in a better team and a better product.

In today’s world of developing services we tend to move fast and with that comes mistakes. This talk will discuss using post-mortems to turn incidents into opportunities for improvement, instead of just an opportunity to assign blame.
 recording release: no  

28. Find Your Feature Fit: How to Pick a Text Editor for Python Programming
Gregory M. Kapfhammer, Madelyn M. Kapfhammer

This presentation will explore the different features of text editors for
Python programming. By comparing the capabilities of VS Code and Vim, audiences
of all skill levels will receive the necessary information to make an informed
decision about which text editor fits their programming preferences. Using the
illustrative example of a Python programmer who is implementing a Python
program, the talk will introduce and compare features including fuzzy file
finding and code navigation, auto-completion, source code highlighting,
linting, testing, virtual environments, and snippets. For VS Code and Vim,
these selected features showcase what is often important to a Python
programmer, highlighting the trade-offs and benefits of both text editors. Here
are some topics that we will cover in this presentation:

- **Fuzzy File Finding**: Rapidly search for files in your project with names
  that match a pattern.

- **Source Code Highlighting**: Bring clarity by applying colors and fonts to
  your source code and technical writing.

- **Autocompletion**: Save time by quickly substituting partial code and text
  segments with the desired content.

- **Linting and Code Formatting**: Check and reformat source code and writing to
  ensure adherence to well-established style guides.

- **Virtual Environments and Packages**: Maintain project isolation by
  installing and managing packages in separate development and execution
  environments.

- **Automated Testing and Debugging**: Establish a confidence in program
  correctness by running test suites and finding and fixing bugs.

- **Code Snippets**: Save time when programming and testing by inserting full
  segments based on easy-to-complete keywords.

Ultimately, this presentation will demonstrate that both VS Code and Vim are
outstanding text editors for Python, with features that can assist in many
everyday programming tasks. In different ways, and possibly with different
disadvantages or benefits, these text editors improve a programmer's efficiency
and effectiveness, becoming an indispensable part of an everyday workflow. With
the knowledge of the features that VS Code and Vim offer, the audience will be
able to choose which editor is best for them, emerging with the know-how to
configure it to their preferences for Python programming. Both beginners and
experts alike will be capable of finding their "feature fit" for a text editor
that supports Python programming!

What is important to you when it comes to text editors? To find out, join us in comparing VS Code and Vim. From version control integration to source code highlighting, with auto-completion, testing, virtual environments, snippets, code navigation and linting in between, learn how VS Code and Vim handle each feature and decide for yourself what fits your programming preferences when using Python.
 recording release: yes license: youtube  

29. # TODO: Add Comments: 5 Tips for _Winning_ at Code Comments
Nik Kantar

Documentation often gets paid lip service, and code comments almost always suffer the most. And yet they're often that last-moment savior during archeological expeditions into depths no longer known, unearthing obscure bugs or just trying to understand the foundation upon which to build something new.

Alas, we're all human, and thus oh so very fallible. And so we fall prey to habits which make the situation worse over time, usually little by little. We try to be heroes, but end up the very villains we bemoan.

"Okay, how can I do better?" you ask. In this talk we'll cover five simple things you can do to hack yourself into writing better comments.

This talk covers five unexpected pieces of advice for writing better code comments. From an editor change to some sound writing advice, it takes a brief journey into a few habits of successful commenters. Disclaimer: opinions ahead!
 recording release: yes license: youtube  

30. Using Python & R in Harmony
Matthew Brower, Krista Readout

How often do you hear the question "Python or R?"

Aspiring analytics professionals often feel the need to choose & learn a 'one size fits all' language for their scripting work.  There are many cases, though, where a specific library in Python or R is more effective than similar libraries in the other language.  This can lead to some painful tradeoffs when selecting a single language for your work.  Great news: recent developments have made leveraging both languages in the same workflow easier than ever before.

In this talk, we’ll present methods for leveraging R from directly within Python environments (and vice versa).  We will illustrate the use of these methods by using popular libraries to execute common analytics tasks across languages without switching development environments.
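
One widely used option for the Python-to-R direction is rpy2; naming it here is an assumption on our part, since the abstract doesn't specify its tooling:

```python
import rpy2.robjects as ro

r_mean = ro.r["mean"]                                       # look up R's mean() function
print(float(r_mean(ro.FloatVector([1.0, 2.5, 4.0]))[0]))   # -> 2.5

ro.r('fit <- lm(mpg ~ wt, data = mtcars)')   # run arbitrary R code on a built-in dataset
print(ro.r('coef(fit)'))                     # pull the R model's coefficients back
```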

Python and R are two of the most popular languages used for data analysis.  They are often pitted against each other in pros and cons lists, where users feel forced to pick just one.  Each has unique advantages, and it's now easier than ever to use them harmoniously.  Python or R?  Why not both?
 recording release: yes license: youtube  

31. Your Own Personal Bootcamp: How to Efficiently Learn Your Next Technology
Joe Erickson

With the wealth of learning materials out there, why is it still not easy to pick up and learn new technologies? Why do we still have trouble going from learning to doing? With limited time to pick up the new things that will advance our careers, what's the most efficient way to retain and implement the new skills that we need? Taking lessons from adult learning theory and from examples of hundreds of bootcamp students, this talk will walk through what I have learned about accelerated adult learning and tell you the what and the why around techniques that you can use to more efficiently pick up your next technology in record time.

There has been substantial research done on how adults learn, but the findings aren't widely known. Using the same learning tactics you used as a kid doesn't always bring the best results, or results that will stick. Adults shouldn't be aiming to memorize facts for a test; they should be looking to build long-term skills that they can apply when needed.

This talk will pass on some of the most important and actionable findings in adult learning research and will walk attendees through a path to learning a new skill that is efficient and effective.

Topics include:

- The Dreyfus Model of Learning
- How learning shifts between the novice and the intermediate
- Understanding the levels of mastery
- The Effective Tech Bootcamp model of learning
- What learning techniques are effective and why
- Creating a learning plan that works for you

Taking lessons from adult learning theory and from examples of hundreds of bootcamp students, this talk will walk through what I have learned about accelerated adult learning and tell you the what and the why around techniques that you can use to more efficiently pick up your next technology in record time.
 recording release: yes license: youtube  

32. Sipping the Nectar of Amazon from the Serverless Chalice
Ilya Gotfryd

You have a small piece of functionality that doesn’t elegantly fit into various domains your existing application already covers. You’re of course concerned about delivering that functionality to production, and making it securely available to the end user. All of this is followed with an “if only I could” stream of thoughts, and cautious conversations with your Ops team that don’t go anywhere beyond hypotheticals. This is a perfect time to look into a serverless framework like Chalice. In this session, we will discover the flexibility, robustness, and ease of use inherent in serverless frameworks. We will dig deeper into ways to package production level code, including security, deployment, and load considerations. We will also touch on alternatives and general concerns for such architectural decisions.

It never seems to be the right time to enter the sweet world of microservices. Each time you use "serverless" in a conversation, it dies right there near the water-cooler. How do you produce a POC, tests, a build, and proper security if your teammates can’t come along? In this talk you will learn to: build, debug, validate, test, secure, and deploy with a build pipeline using a Python framework.
 recording release: yes license: youtube  

33. Saturday Lightning Talks
Dave Forgac

1. Secure Your PyPI Account! by Ernest W. Durbin III
2. Tim Has Too Many Projects -- Please Help by Tim 'mithro' Ansell
3. Announcing PyCarolinas 2020!
4. How I Wrote My Most Recent Tweet
5. Building an ORM using dataclasses by Jace Browning
6. Python in AWS Lambda by Peter Landoll
7. G New Cash - Balancing Your Checkbook w/ Python by Paul Bromwell, Jr.
8. Maintaining 100 PyPI Packages
9. Property Testing Pandas / Bulwark
10. Venmo me @graduation by Josh Martin

5-minute talks on topics of interest to the PyOhio community.
 recording release: yes license: youtube  

34. Sunday Welcome
Dave Forgac

 

Welcome to PyOhio! Important information and a brief overview of the conference.
 recording release: yes license: youtube  

35. The Gig is Up: Radical Shifts That Save Cultures, Teams, and Companies
Greg Svoboda

The way we traditionally build, lead, and participate in development teams isn’t doing us any favors. In fact, it might be literally killing us. Greg will speak on how a radical paradigm shift can save not just our projects and teams, but our very passion to do the work itself.
 recording release: yes license: youtube  

36. Surviving Without Python
Andrew Knight

Python is not the only “fish in the sea” - there are several good languages and frameworks out there that are awesome in their own right. And as software people, whether we are web developers, data scientists, or some other role, we probably won’t spend 100% of our time using Python. It’s inevitable. Web dev relies on JavaScript. Data scientists often use R and Scala. Backends frequently use C# and Java. Success as a modern software engineer requires inter-domain proficiency.

Personally, even though I love Python, I don’t use it daily at my full time job. Nevertheless, Pythonic thinking guides my whole approach to software. I will talk about how the things that make Python great can be applied to non-Python places in three primary ways:

1. Principles from the Zen of Python
2. Projects that partially use Python
3. People who build strong, healthy community

I will provide stories, statistics, examples, projects, side-by-side code comparisons, and pictures to explain these points well. Python’s values can make the software world a better place!

Python is such a popular language for good reason: Its principles are strong. However, if Python is “the second-best language for everything”… that means the _first-best_ is often chosen instead. Oh no! How can Pythonistas survive a project or workplace without our favorite language? Take a deep breath, because I’ll show you how to apply things that make Python great to other software spaces.
 recording release: yes license: youtube  

37. Quickly Build Your Own Personal Website with Python
Vince Salvino

Quick overview and pros/cons of common web development platforms and what they offer:

* WordPress

* Static site generators

* Popular Python content management systems: Plone, django CMS, and Wagtail

What is [CodeRed CMS](https://github.com/coderedcorp/coderedcms)?

* Open source pip package, based on Wagtail, Django, and Bootstrap CSS.

* Provides a nice interface and pre-built components to get you up and running quickly with no code.

* Similar level of editing and configurability as WordPress.

Live tutorial: we will use CodeRed CMS to build a personal blog.

* First we will install the pip package and get a basic site set up with zero coding required!

* Second we will use Python to write a little code for advanced customization of our new website.

Attendees will leave this talk with an understanding of the current state of Python content management systems, and with knowledge of how to build their own personal website or blog.

Haven't gotten around to building that personal blog? How about a website for your side project? Python actually has a rich ecosystem of web development tools that are easy to learn and fun to use! Bring your laptop and follow along as we build a personal blog LIVE in this talk using the pip package: CodeRed CMS (based on Wagtail and Django).
 recording release: yes license: youtube  

38. Leave Your Inhibitions at the Database Connection
Regina Compton

## Abstract

Reconciling old assumptions with new approaches can be difficult. This reconciliation can be especially difficult, when those assumptions and approaches correspond with one’s emerging professional identity. A musicologist turned developer, a Rubyist turned Pythonist, I know well how intrapersonal tensions can shape (or hinder) approaches to writing code. This talk confronts these tensions by describing the technical and emotional dimensions of my less-than-easy journey from SQL to the Django ORM.

Django supports two basic approaches to interacting with a database: (1) running queries with the Django database-mapper (more commonly described as the “ORM”), and (2) performing raw SQL. My first Django projects display a strong preference for the latter approach. I came to Django with very limited coding experience. I started my job at a Python shop as a freshly minted grad of Dev Bootcamp, where I had acquired some knowledge of Ruby on Rails and its ORM, but also learned about the possibilities and easeful-ness of SQL. I eschewed the Django ORM, in part because of its seeming unknowableness, but mainly because SQL was a familiar face in an unfamiliar land. In Django, I wrote SQL for simple queries (selecting with a WHERE clause), moderately challenging ones (joining multiple tables + ordering with CASE expressions), and obscenely complex ones (subqueries + aggregate functions + string manipulations). Whatever the case, I generally found my queries to be transparent, flexible, and friendly.

It took over a year for me to appreciate that the Django ORM does clever and astonishing things. I eventually found joy in annotating querysets with derived values, and I stood in awe of the Prefetch object in elaborate prefetch operations. The ORM, I learned, could produce clean code and also bypass the performance loss that comes with transforming SQL results into more amenable data types (e.g., namedtuple). 
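
For readers who haven't met those features, here is the kind of query this paragraph alludes to, written against hypothetical Author/Book models (not code from the talk):

```python
from django.db.models import Count, Prefetch
from library.models import Author, Book   # hypothetical app and models

recent = Book.objects.filter(published__year__gte=2015).order_by("-published")

authors = (
    Author.objects
    .annotate(book_count=Count("books"))   # derived value computed per author
    .prefetch_related(Prefetch("books", queryset=recent, to_attr="recent_books"))
    .order_by("-book_count")
)

for author in authors:
    print(author.name, author.book_count, [b.title for b in author.recent_books])
```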

In this talk, I will share some lessons in Django. But also, I will suggest strategies for evaluating solid, familiar approaches and replacing them with alternative ones.

It is easy to cling to the familiar to avoid the unknown – even when unfamiliar approaches better serve your work. My talk explores this fact, specifically, by looking at the technical and emotional dimensions of my less-than-easy journey from writing raw SQL to using the Django ORM.
 recording release: yes license: youtube  

39. gRPC and What, Why, How?
John Roach

# gRPC and what, why, how?

In this talk, we will be covering the following topics:

- **What is gRPC?** We will be talking about serialization and what RPC is in general. We will quickly skim over the history of similar earlier protocols, for example SOAP and CORBA. We will talk about the problem space these protocols tried to solve and why they slowly lost popularity.

    We will be giving a quick overview of RESTful services and what has been done so far to support RESTful services.

    We will look into the history of gRPC and how it came to be.

- **Why use gRPC?** With a segue from the first topic, we will look into what gRPC is doing differently than the previous generation of RPC solutions and the pros/cons versus REST.

- **How can we use gRPC with Python?** We will showcase, via live coding (or code samples as backup), the creation of a 'Hello World' application. We will write a simple proto file, generate code from it, start it as a service, and query the service using an open source tool. We will also demonstrate how quickly someone can create a client for the gRPC service. (A rough sketch of the server side follows this list.)

- **Production Considerations:** We will go over the most important considerations when deciding to use gRPC in production, such as build tooling, testing, deployment, and load management.

- **Q&A:** Time allowing, we will open the floor to questions.
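
For reference, the server side of a gRPC 'Hello World' in Python has roughly this shape; the helloworld_pb2* modules are what grpcio-tools generates from a simple .proto file and are assumed here rather than shown:

```python
from concurrent import futures
import grpc
import helloworld_pb2         # assumed: generated from the .proto by grpcio-tools
import helloworld_pb2_grpc    # assumed: generated service stubs

class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message=f"Hello, {request.name}!")

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```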

You might have overheard yet another acronym "gRPC" getting thrown when talking about a replacement of REST or when mentioning microservices. In this talk, we will be looking into what gRPC is, the reasons why you would use it, how you would use it with Python and talk about considerations for running gRPC services in production.
 recording release: yes license: youtube  

40. Probabilistic Programming and Bayesian Inference in Python
Lara Kattan

Let's build up our knowledge of probabilistic programming and Bayesian inference! All you need to start is basic knowledge of linear regression; familiarity with running a model of any type in Python is helpful. 

By the end of this presentation, you'll know the following: 
- What probabilistic programming is and why it's necessary for Bayesian inference
- What Bayesian inference is, how it's different from classical frequentist inference, and why it's becoming so relevant for applied data science in the real world 
- How to write your own Bayesian models in the Python library PyMC3, including metrics for judging how well the model is performing 
- How to go about learning more about the topic of Bayesian inference and how to bring it to your current data science job 

We'll meet our objectives by answering three questions: 

1. What is probabilistic programming?
    * PP is the idea that we can use computer code to build probability distributions 
    * Theory of the primitives in probabilistic programming and how we can build models out of distributions 

2. What is Bayesian inference and why should I add it to my toolbox on top of classical ML models?
    * Classically, we had simulations, but they run in only one direction: get data input and move it according to assumptions of parameters and get a prediction
    * Bayesian inference adds another direction: use the data to go back and pick one of many possible parameters as the most likely to have created the data (posterior distributions) 
    * Use Bayes' theorem to find the most likely values of the model parameters

3. What is PyMC3 and how can I start building and interpreting models using it? 
    * **We'll work through actual examples of models using PyMC3, including hierarchical models** 
    * Solving Bayes’ theorem in practice requires taking integrals 
    * If we don’t want to do integrals by hand, we need to use numerical solution methods
    * From the package authors: "[PyMC3 is an] open source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed"

The intention is to get hands-on experience building PyMC3 models to demystify probabilistic programming / Bayesian inference for those more well versed in traditional ML, and, most importantly, to understand how these models can be relevant in our daily work as data scientists in business.
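
A minimal PyMC3 model in that spirit, run on synthetic data (the talk's worked examples will be richer):

```python
import numpy as np
import pymc3 as pm

observed = np.random.normal(loc=3.0, scale=1.5, size=200)  # pretend measurements

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sd=10.0)          # prior on the mean
    sigma = pm.HalfNormal("sigma", sd=5.0)         # prior on the spread
    pm.Normal("y", mu=mu, sd=sigma, observed=observed)
    trace = pm.sample(1000, tune=1000)             # MCMC: draw from the posterior

print(pm.summary(trace))                           # posterior means, credible intervals
```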

If you can write a model in sklearn, you can make the leap to Bayesian inference with PyMC3, a user-friendly intro to probabilistic programming (PP) in Python. PP just means building models where the building blocks are probability distributions! And we can use PP to do Bayesian inference easily. Bayesian inference allows us to solve problems that aren't otherwise tractable with classical methods.
 recording release: yes license: youtube  

41. The Mediocre Programmer
Craig Maloney

This talk presents highlights from a book that I'm writing about the journey of being a mediocre programmer. Beginners tend to get all of the love, and advanced programmers get all of the respect and glory, but we don't have much for helping intermediate programmers. We don't tend to consider how difficult it can be to work through the exuberance of beginning programming (where everything is new, fresh, and exciting) into becoming better programmers. We're just expected to figure things out on our own. This talk draws on my experiences of being a mediocre programmer and gives advice and tips on how to become better programmers. We'll cover how to find a group of traveling companions, how to focus on learning one thing at a time, and how to deal with the struggles of our emotions and self-doubt. We'll also cover examining our emotions and understanding when the spark that drew us to programming is truly burned out.

Mediocre Programmers? What is that? Shouldn't we want to be great programmers instead? In this talk we'll discuss what it means to be a mediocre programmer. We'll consider the many pitfalls that may befall you on your journey, from self-doubt to burnout, and share tips for how to cope with the challenges of programming and when it might be time to try something new.
 recording release: yes license: youtube  

42. What's the Buzz with Machine Learning
Allison Bolen

Pesticides, parasites, and poor nutrition have led to the decline of honeybee colonies throughout North America. A number of methods have been proposed to combat the problem, with one here at Grand Valley State University (GVSU) focusing on collecting hive weight data and identifying potential issues through data analytics. Currently, “citizen scientist” beekeepers participate by collecting weight data from their hives through the Bee Informed Partnership (BIP). Using Python 3, Bokeh, scikit-learn, and pandas, we were able to produce a linear regression model that could predict patterns in weight data. Our short-term goal for the project was to create a model that could predict events, and the windows of time in which events could have occurred, to improve data quality and user engagement. The ultimate long-term goal of this project is to predict what kind of event occurred, such as adding food to the hive, harvesting honey, swarming events, and even parasite infestation.
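A minimal sketch of the general idea (fit a trend, flag large deviations as candidate events); the file name, column names, and threshold below are hypothetical, not the project's actual data or model:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical hive weight time series
df = pd.read_csv("hive_weights.csv", parse_dates=["timestamp"])
df["hours"] = (df["timestamp"] - df["timestamp"].min()).dt.total_seconds() / 3600

# Fit a simple linear trend to the weight data
model = LinearRegression()
model.fit(df[["hours"]], df["weight_kg"])

# Flag readings that deviate sharply from the trend as candidate events
df["residual"] = df["weight_kg"] - model.predict(df[["hours"]])
events = df[df["residual"].abs() > 2 * df["residual"].std()]
print(events[["timestamp", "weight_kg"]])
```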

Honeybee colonies throughout North America have declined precipitously due to parasites, pesticides, and poor nutrition over the past two decades. Monitoring hive health autonomously assists beekeeper efforts. We developed a model that automatically detects events in beehive weight data, assisting data collection efforts and improving data quality for future machine learning models.
 recording release: yes license: youtube  

43. The Riddle of the Intersphinx: Configuration and Cross-Reference Composition
Brian Skinn

[Sphinx](http://www.sphinx-doc.org) is a documentation generator used by the [core Python documentation](https://docs.python.org/3/library/index.html) and numerous other packages such as [SciPy](https://docs.scipy.org/doc/scipy/reference/), [Django](https://docs.djangoproject.com/en/), and [Blender](https://docs.blender.org/api/current/). Sphinx supports cross-references across project boundaries via its ['intersphinx' extension](http://www.sphinx-doc.org/en/stable/ext/intersphinx.html#module-sphinx.ext.intersphinx), which uses data from an objects inventory file generated by Sphinx when building HTML docs.  However, configuration of the intersphinx mappings to external documentation and correct composition of the cross-references to specific external objects can both be challenging to achieve, as the necessary reference syntax can vary in a non-obvious way. Related messages/warnings issued during the Sphinx build process, if enabled, are useful for identifying that a problem exists, but are typically of minimal help in fixing the broken references. The [:any: role](http://www.sphinx-doc.org/en/stable/markup/inline.html#role-any) is convenient for some cases, but is unhelpful when a given object name is ambiguous (e.g., with the Python [max() builtin](https://docs.python.org/3/library/functions.html#max) versus [numpy.ndarray.max](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.max.html)).

In this talk, I will describe a (mostly) systematic approach to intersphinx configuration and usage, including locating the inventory for an external docset; decoding and parsing the inventory to obtain the information needed for a functional intersphinx reference; and constructing the cross-reference from this information. As I hope to demonstrate, using intersphinx is quite easy, as long as you know where to look for key information, and what to do with it.
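For orientation, a minimal intersphinx configuration looks roughly like the following (the mapped URLs and cross-reference targets are examples, not the talk's worked case):

```python
# conf.py -- minimal intersphinx setup
extensions = ["sphinx.ext.intersphinx"]

intersphinx_mapping = {
    # None means "fetch objects.inv from the given base URL"
    "python": ("https://docs.python.org/3", None),
    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
}

# In the .rst sources, a cross-reference then names the domain, role, and target:
#   :py:func:`max`                 -> the Python max() builtin
#   :py:meth:`numpy.ndarray.max`   -> the NumPy method, disambiguated by its full dotted path
```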

Sphinx is a documentation generator used by the core Python documentation and numerous other projects in the Python ecosystem. Sphinx supports cross-references between documentation sets via its ‘intersphinx’ extension; however, proper configuration is not always straightforward, and cross-references can be finicky to craft correctly. This talk aims to demystify these riddles of intersphinx usage.
 recording release: yes license: youtube  

44. I Lost 25 Pounds Thanks to Python: Personal Data Analytics Using Pandas and Numpy
Jack Bennett

Your smartwatch and smartphone provide reams of data about your body, movement, behavior, health, and more. Python is an ideal language to use for analyzing, transforming, and displaying this data. Furthermore, numerous third-party packages such as NumPy, SciPy, pandas, and matplotlib make this process easier, faster, more fun, and more insightful than ever before.

Better still, you can use these tools to get tangible results in your life: for example, during the first few months of 2019, I used a set of Python scripts operating on a combination of personal data sources to modify my habits and behaviors to lose 25 pounds!

In this talk we analyze several streams of data from Apple Watch and iPhone to explore what we can learn from them, individually and in combination. Data categories that we explore include:

* sleep
* fasting
* heart rate
* body weight

We use simple but powerful techniques from signal processing, including moving averages and filtering, to extract insight from the data. Additionally, we investigate correlations between the different data streams.
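As a small illustration of the moving-average idea (the file and column names here are hypothetical, not my actual export format):

```python
import pandas as pd

# Hypothetical daily body-weight export
weights = pd.read_csv("body_weight.csv", parse_dates=["date"], index_col="date")

# A 7-day moving average smooths day-to-day fluctuation and reveals the trend
weights["trend"] = weights["weight_lb"].rolling(window=7, min_periods=1).mean()

# Correlate day-over-day weight change with another stream, e.g. nightly sleep duration
sleep = pd.read_csv("sleep.csv", parse_dates=["date"], index_col="date")
combined = weights.join(sleep, how="inner")
print(combined["weight_lb"].diff().corr(combined["sleep_hours"]))
```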

Putting this methodology in place is fun, informative, and personally rewarding. In particular, you can use it for habit tracking, to increase self-knowledge and motivate useful habit change.

Python provides a great set of built-in tools and third-party libraries for data analysis. Modern personal devices like smart watches or phones generate streams of data about body metrics, location, movement, and more. I describe Python-based methods for extracting and analyzing data from personal smart devices. I applied these methods to track and change habits and behaviors to lose 25 pounds.
 recording release: yes license: youtube  

45. Automated Discovery of Cancer Types from Genes
Shruthi Ravichandran

While many other diseases are relatively predictable and treatable, cancer is very diverse and unpredictable, making diagnosis, treatment, and control extremely difficult. Traditional methods try to treat cancer based on the organ of origin in the body, such as breast or brain cancer, but this type of classification is often inadequate. If we are able to identify cancers based on their gene expressions, there is hope to find better medicines and treatment methods. However, gene expression data is so vast that humans cannot detect such patterns. In this project, the approach is to apply unsupervised deep learning to automatically identify cancer subtypes. In addition, we seek to organize patients based on their gene expression similarities, in order to make the recognition of similar patients easier.

While traditional clustering algorithms use nearest-neighbor methods and linear mappings, we use a recently developed technique called Variational Autoencoding (VAE) that can automatically find clinically meaningful patterns and therefore find clusters with clinical significance. The Python-based deep learning framework Keras offers an elegant way of defining such a VAE model, training it, and applying it. In this work, the data of 11,000 patients across 32 different cancer types was retrieved from The Cancer Genome Atlas. A VAE was used to compress 5000 dimensions into 100 clinically meaningful dimensions. Then, the data was reduced to two dimensions for visualization using t-SNE (t-distributed stochastic neighbor embedding). Finally, an interactive JavaScript scatter plot was created. We found that the VAE representation correctly clustered existing types, identified new subtypes, and pointed to similarities across cancer types. This interactive plot of patient data also allows the study of nearest patients, and when a classification task was created to validate the accuracy of the representation, it achieved 98% accuracy. The hope is that this tool will allow doctors to quickly identify specific subtypes of cancer found using gene expression and allow for further study into treatments provided to other patients who had similar gene expressions.
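The VAE itself is too large for a short snippet, but the final visualization step can be sketched as follows (array file names and parameters are hypothetical, for illustration only):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical latent codes produced by the trained VAE encoder, one row per patient
latent = np.load("vae_latent_codes.npy")
cancer_type = np.load("cancer_type_labels.npy")  # integer label per patient

# Reduce the latent space to two dimensions for plotting
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latent)

plt.scatter(embedding[:, 0], embedding[:, 1], c=cancer_type, s=5, cmap="tab20")
plt.title("t-SNE of VAE latent space, colored by cancer type")
plt.show()
```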

Cancer treatment often focuses on the organ of origin, but different types can occur in one organ. Gene expression provides valuable clues to the cancer type, but studying the data manually is difficult. Instead, we use variational autoencoding, a deep learning method, to derive a 36-dimensional feature space from the 5000-dimensional gene space and show its efficacy in classification and a t-SNE visualization.
 recording release: yes license: youtube  

46. Learn How Computers Work Between Silicon and Assembly: Build a CPU with Python
Zak Kohler

Programming languages are designed for a specific level of abstraction or distance from the hardware. The main trade off is "developer productivity" vs. "control over hardware". C and assembly are low level and therefore map closely to CPU instructions. Python on the other hand goes through many layers, libraries, and a virtual machine before the CPU is reached. This allows powerful programs to be written concisely and cross-platform—but it also conceals the true nature at the heart of our modern world. Unveiling the magic within can lead to interesting insights about how computing got to where it is today.

Specs for nerds: 8-bit words, 256 memory addresses, Von Neumann w/ shared address+data bus, DMA with numpy based buffer.
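To give a flavor of the fetch-decode-execute idea, here is a toy with an invented two-instruction ISA (not the talk's actual design):

```python
import numpy as np

# 8-bit words, 256 memory addresses
memory = np.zeros(256, dtype=np.uint8)
registers = {"A": 0, "PC": 0}

# Invented ISA for illustration: 0x01 = LOAD addr -> A, 0xFF = HALT
program = [0x01, 0x10, 0xFF]
memory[:len(program)] = program
memory[0x10] = 42  # data the LOAD instruction will fetch

# Fetch-decode-execute loop
while True:
    opcode = memory[registers["PC"]]
    if opcode == 0x01:                       # LOAD: copy memory[operand] into A
        addr = memory[registers["PC"] + 1]
        registers["A"] = int(memory[addr])
        registers["PC"] += 2
    elif opcode == 0xFF:                     # HALT
        break

print(registers["A"])  # -> 42
```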

We will build a CPU focused on transparency, interactivity, and modularity. Our CPU has a configurable architecture and machine language. Yes, you can invent your own assembly instructions to add functionality. We will cover registers, data/address buses, memory (ROM/RAM), IO, and assemblers.
 recording release: yes license: youtube  

47. A Practical Introduction to Integer Linear Programming
Igor Ferst

How do airlines choose which planes service which routes? How does a hospital optimize the shift schedule for hundreds of doctors and nurses? How do you choose the optimal location for a group of fulfillment centers, or oil derricks, or cell towers? These kinds of problems (and many others!) can be solved with integer linear programming (ILP), a powerful and decades-old framework for solving optimization problems. In this talk we will give a brief introduction to ILP and describe its uses, strengths, and weaknesses. We will also show how to solve a real-world vehicle routing problem using Google's open-source Python library for ILP. Trigger warning: this talk will contain high-school-level math.
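A taste of what an ILP looks like in code, using a toy 0/1 knapsack rather than the talk's vehicle routing example (solver choice and data are illustrative; recent versions of Google's OR-Tools expose this API):

```python
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")

values = [10, 13, 18, 31]
weights = [2, 3, 4, 7]
capacity = 10

# One binary decision variable per item: take it (1) or leave it (0)
x = [solver.IntVar(0, 1, f"x{i}") for i in range(len(values))]

# Constraint: total weight of chosen items stays within the budget
solver.Add(sum(w * xi for w, xi in zip(weights, x)) <= capacity)

# Objective: maximize total value of the chosen items
solver.Maximize(sum(v * xi for v, xi in zip(values, x)))

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print([int(xi.solution_value()) for xi in x], solver.Objective().Value())
```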

Integer linear programming (ILP) is a powerful framework for solving optimization problems related to scheduling, resource allocation, vehicle routing, and many other areas. This talk will give a brief introduction to ILP and show how to solve a real-world vehicle routing problem using Google's open-source Python library for ILP.
 recording release: yes license: youtube  

48. Keeping Fun in Computing
Dustin Ingram

We'll also talk about some modern examples of how folks are ensuring technology remains not-so-serious, including some specific to the Python community, and how some famous thinkers followed their natural curiosity to keep science fun, all to great success.
 
And finally, we'll discuss how you and I can keep computing fun on a day-to-day basis, maintain and nurture our natural curiosity, and just be open to the unknown, all to the benefit of our field, those we work with, and ourselves.

In this talk, we'll explore how maintaining a sense of fun and whimsy in science has a profound effect on discovery, innovation and progress.
 recording release: yes license: youtube  

49. Search Logs + Machine Learning = Auto-Tagged Inventory
John Berryman

For e-commerce applications, matching users with the items they want is the name of the game. If they can't find what they want, then how can they buy anything?! Typically this functionality is provided through a search and browse experience. Search allows users to type in text and match against the text of the items in the inventory. Browse allows users to select filters and slice-and-dice the inventory down to the subset they are interested in. But with the shift toward mobile devices, no one wants to type anymore - thus browse is becoming dominant in the e-commerce experience.

But there's a problem! What if your inventory is not categorized? Perhaps your inventory is user generated or generated by external providers who don't tag and categorize the inventory. No categories and no tags means no browse experience and missed sales. You could hire an army of taxonomists and curators to tag items - but training and curation will be expensive. You can demand that your providers tag their items and adhere to your taxonomy - but providers will buck this new requirement unless they see obvious and immediate benefit. Worse, providers might use tags to game the system - artificially placing themselves in the wrong category to drive more sales. Worst of all, creating the right taxonomy is hard. You have to structure a taxonomy to realistically represent how your customers think about the inventory.

Eventbrite is investigating a tantalizing alternative: using a combination of customer interactions and machine learning to automatically tag and categorize our inventory. As customers interact with our platform - as they search for events and click on and purchase events that interest them - we implicitly gather information about how our users think about our inventory. Search text effectively acts like a tag, and a click on an event card is a vote that the clicked event is representative of that tag. We are able to use this stream of information as training data for a machine learning classification model, and as we receive new inventory, we can automatically tag it with the text that customers will likely use when searching for it. This makes it possible to better understand our inventory, our supply and demand, and most importantly this allows us to build the browse experience that customers demand.
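As a toy sketch of the idea (the data below is invented, and the real system works on click logs at much larger scale with a more sophisticated model):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs mined from logs: the query a user typed (acting as a tag)
# and the title of the event they clicked after searching
queries = ["salsa dancing class", "intro to python workshop", "food truck festival"]
clicked_titles = ["Salsa Night for Beginners", "Python 101", "Taco Truck Rally"]

# Train a classifier that predicts the likely search tag for new inventory text
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(clicked_titles, queries)

# Auto-tag a newly created event from its title
print(model.predict(["Weekend salsa social with live band"]))
```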

In this talk I will explain in depth the problem space and Eventbrite's approach in solving the problem. I will describe how we gathered training data from our search and click logs, and how we built and refined the model. I will present the output of the model and discuss both the positive results of our work as well as the work left to be done. Those attending this talk will leave with some new ideas to take back to their own business.

Eventbrite is exploring a new machine learning approach that allows us to harvest data from customer search logs and automatically tag events based upon their content. The results have allowed us to provide users with a better inventory browsing experience.
 recording release: yes license: youtube  

50. Making Games with ppb
Piper Thunstrom

Come in knowing nothing about games, and we'll get you out the door with one or two small games under your belt! You will need to know some basic Python: you should be comfortable writing and instantiating classes, and you should know how to write and call your own functions. We'll cover all the complex bits of games and get you going quickly (a minimal ppb sketch follows the outline below).

1. Introduction
    1. Piper Thunstrom
    2. PPB
    3. Get installed
2. Game 1 - A simple "shooter"
    1. Make a window
    2. Adding your Player
    3. Hooking up controls
    4. Making projectiles
    5. Adding targets
3. Game 2 - Virtual Pet
    1. Create your pet
    2. Add hunger
    3. Add boredom and playing.
    4. Add a filth level and washing your pet.
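For a sense of scale, a complete ppb program that opens a window with one moving sprite is only a few lines (this sketch is illustrative and doesn't cover the controls, projectiles, or pet mechanics outlined above):

```python
import ppb

class Player(ppb.Sprite):
    def on_update(self, update_event, signal):
        # time_delta keeps the movement frame-rate independent
        self.position += ppb.Vector(0, 1) * update_event.time_delta

def setup(scene):
    scene.add(Player())

ppb.run(setup=setup)
```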

Learn to make games with Python. ppb aims to make building games simple and fun! We'll cover the basic features of ppb to make a basic game, then dive into the powerful extensibility features it offers to make even more complex games.
 recording release: yes license: youtube  

51. Gathering Insights from Audio Data
Ryan Bales

We’ll go over the different types of audio formats and how the format and type of audio play a role in the quality of the outcome. We’ll survey the transcription options available today and provide a demo of converting audio data into text. We’ll review ways of storing and searching text data at scale using open source tools and Natural Language Processing (NLP) techniques. Going further, we’ll explore different techniques for building machine learning models on the transcribed text data. You’ll leave this session with a firm understanding of how to take audio data and convert it into actionable insights.
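One of the transcription options looks roughly like this (a minimal sketch using the SpeechRecognition package; the file name is hypothetical, and the free Google web API is just one backend among several):

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a WAV file and capture its audio data
with sr.AudioFile("interview.wav") as source:
    audio = recognizer.record(source)

# Send the audio to a speech-to-text backend and print the transcript
text = recognizer.recognize_google(audio)
print(text)
```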

Data comes in many shapes and sizes. In this session, we’ll look into the process of converting audio files into valuable data.
 recording release: yes license: youtube  

52. Deep Learning Like a Viking: Building Convolutional Neural Networks with Keras
Guy Royse

The Vikings came from the land of ice and snow, from the midnight sun, where the hot springs flow. In addition to longships and bad attitudes, they had a system of writing that we, in modern times, have dubbed the Younger Futhark (or ᚠᚢᚦᚬᚱᚴ if you're a Viking). These sigils are more commonly called runes and have been mimicked in fantasy literature and role-playing games for decades.

Of course, having an alphabet, runic or otherwise, solves lots of problems. But, it also introduces others. The Vikings had the same problem we do today. How were they to get their automated software systems to recognize the hand-carved input of a typical boatman? Of course, they were never able to solve this problem and were instead forced into a life of burning and pillaging. Today, we have deep learning and neural networks and can, fortunately, avoid such a fate.

In this session, we are going to build a Convolutional Neural Network to recognize hand-written runes from the Younger Futhark. We'll be using Keras to write easy-to-understand Python code that creates and trains the neural network to do this. We'll wire this up to a web application using Flask and some client-side JavaScript so you can write some runes yourself and see if it recognizes them.
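The heart of such a network is compact. A minimal sketch (the image size and layer sizes are assumed for illustration, with 16 output classes for the 16 Younger Futhark runes):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

num_runes = 16  # the Younger Futhark has 16 runes

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                  # small grayscale glyph images
    layers.Conv2D(32, (3, 3), activation="relu"),     # learn local stroke features
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_runes, activation="softmax"),    # one probability per rune
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```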

When we're done, you'll understand how Convolutional Neural Networks work, how to build your own using Python and Keras, and how to make it a part of an application using Flask. Maybe you'll even try seeing what it thinks of the Bluetooth logo?

In this session, we are going to build a Convolutional Neural Network to recognize hand-written runes from the Younger Futhark. We'll be using Keras to write easy-to-understand Python code that creates and trains the neural network to do this. We'll wire this up to a web application using Flask and some client-side JavaScript so you can write some runes yourself and see if it recognizes them.
 recording release: yes license: youtube  

53. Let's Build an ORM
Greg Back

The presentation will start with some background on the requirements and on relational databases. Next, we’ll build a basic ORM that allows creating simple tables and inserting, querying, and (if we have time) deleting records. Finally, the talk will cover some of the challenges of building a production-grade ORM, including caching, transactions, and support for multiple dialects, and we’ll briefly discuss the security implications of ORMs, including SQL injection. Attendees will leave with a greater appreciation for the inner workings of the ORMs they use on a daily basis, while understanding the challenges that go into building one.
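The core trick is smaller than it sounds. Here is a toy sketch of the idea (a class maps to a table, keyword arguments map to columns), using sqlite3 and invented names rather than the talk's actual code:

```python
import sqlite3

class Model:
    @classmethod
    def create_table(cls, conn):
        cols = ", ".join(f"{name} {sqltype}" for name, sqltype in cls.fields.items())
        conn.execute(f"CREATE TABLE IF NOT EXISTS {cls.table} ({cols})")

    @classmethod
    def insert(cls, conn, **values):
        names = ", ".join(values)
        placeholders = ", ".join("?" for _ in values)  # parameters guard against SQL injection
        conn.execute(f"INSERT INTO {cls.table} ({names}) VALUES ({placeholders})",
                     tuple(values.values()))

    @classmethod
    def all(cls, conn):
        return conn.execute(f"SELECT * FROM {cls.table}").fetchall()

class Book(Model):
    table = "books"
    fields = {"title": "TEXT", "author": "TEXT"}

conn = sqlite3.connect(":memory:")
Book.create_table(conn)
Book.insert(conn, title="Fluent Python", author="Luciano Ramalho")
print(Book.all(conn))
```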

Applications rely on data, and relational databases are a convenient way to organize structured information. Object-relational mappers like SQLAlchemy and Django’s ORM are complex libraries, but they aren’t black magic. De-mystify some of the magic as we build the basics of an ORM in under an hour.
 recording release: yes license: youtube  

54. Dynamic Data Pipelining with Luigi
Trey Hakanson

As the scale of modern data has grown, so too has the need for modern tooling to handle its growing list of challenges. Databases have had to become more horizontally scalable, less centralized, and more fault tolerant to handle the expectations of modern users. As such, data warehousing and data engineering are relatively new disciplines, and engineers are still hard at work solving the core problems of this new sector. One problem of particular interest is that of dynamic data pipelining and workflows. Ingesting large amounts of data, transforming streams dynamically into a standardized format, and maintaining checkpoints and dependencies to ensure that proper prerequisites are met before beginning a given task are all difficult problems. This talk will describe how these problems can be solved using Luigi, Spotify’s robust tool for constructing complex data pipelines and workflows.

Luigi allows complex pipelines to be described programmatically, handling multiple dependencies and dependents. This allows it to be used for a wide variety of batch jobs, and the option to use the centralized scheduler makes it easy to monitor job progress across data warehouses. In addition, Luigi’s robust checkpoint system allows pipelines to be resumed from any point at which they fail. Each task is well defined, specifying required inputs and resulting outputs, so creating or editing pipelines is a breeze.
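A minimal two-task pipeline shows the shape of the API (the task and file names here are illustrative):

```python
import luigi

class ExtractLogs(luigi.Task):
    date = luigi.DateParameter()

    def output(self):
        # The checkpoint: if this target exists, Luigi considers the task done
        return luigi.LocalTarget(f"logs-{self.date}.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("raw log lines\n")

class CountLines(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        # Declares the dependency; Luigi runs ExtractLogs first if needed
        return ExtractLogs(date=self.date)

    def output(self):
        return luigi.LocalTarget(f"counts-{self.date}.txt")

    def run(self):
        with self.input().open() as f, self.output().open("w") as out:
            out.write(str(sum(1 for _ in f)))

# Run locally with: python pipeline.py CountLines --date 2019-07-27 --local-scheduler
if __name__ == "__main__":
    luigi.run()
```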

As the scale of modern data has grown, so has the need for tooling to handle its growing list of challenges. Whether performing reporting, bulk ingestion, or ETL processes, it is important to maintain flexibility and ensure proper monitoring. Luigi provides a robust toolkit for performing a wide variety of data pipelining tasks, and can be integrated into existing workflows with ease.
 recording release: yes license: youtube  

55. A/V Streaming Workflow in Python
Shishir Pokharel

If you are into live streaming or podcasting, or are planning to stream live, you might come across platforms like Periscope, YouTube Live, and Facebook Live, where you can log in and start your live stream. However, when it comes to manipulating your streams, these platforms don’t expose that functionality to end users. On top of this, these platforms are one-to-one streaming platforms; there is no easy way to broadcast your stream directly to multiple platforms. In this talk, we will cover how to create an application to capture gameplay/screen content, manipulate your stream, record the captured content locally, and stream it to different streaming platforms concurrently.
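A minimal sketch of the building blocks involved, using the GStreamer Python bindings (the pipeline string is a placeholder test source, not the talk's capture-and-broadcast pipeline):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Build a pipeline from a description string: test video source -> display sink
pipeline = Gst.parse_launch("videotestsrc ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

# Keep the GLib main loop running so the pipeline streams until interrupted
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```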

A/V streaming is used widely, but being able to manipulate the stream to meet your custom requirements has always been challenging. We discuss how we can overcome this challenge and build a custom streaming pipeline with the help of an open source multimedia framework and the Python bindings for Gst, GLib, and GObject.
 recording release: yes license: youtube  

56. Is This Your Card? Computer Vision for Playing Card Recognition
Steve Crow

"Pick a card, any card," the magician prompts you fanning out a deck of cards. You select a card, note its value, and hand it back to the magician. They do some sleight of hand, make the card disappear into the deck, and then make it reappear. You confirm that it is, indeed, your original card. The magician moves on and you get to go back to enjoying your dinner.

Where is the real magic? Is it in the magician's ability to make a card reappear? Or, is it something that many of us take for granted each and every day? In the very instant you glance at a card, you're able to take in details without even thinking about it.

Computer Vision aims to teach computers to interact with the visual world. It has applications in navigation, automated inspection, medical image processing, and so much more.

In this talk I will do the following:

- Introduce the field of Computer Vision.
- Demonstrate how to manipulate a webcam video feed and pre-process the video to perform Canny Edge Detection (sketched below).
- Use these edges to isolate a playing card image and, eventually, identify which playing card is being shown.
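The pre-processing step looks roughly like this (a minimal OpenCV sketch; the blur kernel and Canny thresholds are illustrative, not the tuned values from the demo):

```python
import cv2

cap = cv2.VideoCapture(0)  # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # reduce noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)          # Canny edge detection
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```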

No prior knowledge of Computer Vision or Machine Learning is necessary.

Computer Vision aims to teach computers to interact with the visual world. It has applications in navigation, automated inspection, assisting the visually impaired, and so much more. In this talk, I will explain and demonstrate how you can use Computer Vision to locate and identify a playing card in a live video feed.
 recording release: yes license: youtube  

57. Refactor Yourself
Esther B. Gotfryd, MSN, NP-C

We all have been there (or witnessed it in someone else): glassy-eyed, glaring into many open screens, fingers rushing through multiple lines of code, energy drink of some sort by your side, and possibly snacks that were dug out of the pantry (which should have been disposed of long ago). You are now propelled into a sleepless night — and hopefully, victory by the time the sun comes up. As you sit there trying to solve yet another complex problem, you become keenly aware of the fuzzy sensation in your brain, the fatigue in your body, the somewhat weird noises in your stomach, and your general lack of capacity to process what it is you came here to do. In this talk, we will walk through concepts of sleep hygiene, eating habits, digestive concerns, body aches, strains, and pains, mood concerns, and how they all affect your ability to function. We will discuss how to examine your recent behavior, prioritize any symptoms, what to tackle first, and how to persevere in the long run.

Do you sometimes feel like a pile of legacy code? Do you dread refactoring yourself into a "new hotness,” due to the insurmountable amount of work it may take? Do you want to break free, but are unsure of where to start? Look no further, as this session will embark on a journey to refactor you, one red-green test at a time, starting with the highest priority issues first.
 recording release: yes license: youtube  

58. Sunday Lightning Talks
Dave Forgac

1. Accessible Livetweeting by Kat Passen
2. Pygmalion by Shelby Elzinga
3. BYOAPI: Selenium to the rescue by Nik Kantar
4. Busy Beaver by Aly Sivji
5. Ministry of Silly Runtimes: Vintage Python by Dustin Ingram

5-minute talks on topics of interest to the PyOhio community.
 recording release: yes license: youtube  



Location
--------
Cartoon 1


About the group
---------------
https://pyohio.org

A FREE annual conference for anyone interested in Python in and around Ohio, the entire Midwest, maybe even the whole world.