Yes, the title of that 2012 article in Harvard Business Review may have stretched things a bit. If you’re a regular reader of this blog, you know that any scientific, tech, or engineering endeavor has its long stretches of dullness and drudgery. You also know that if you can make it past those stretches, the work’s pretty rewarding.
If you’ve been meaning to get into data science, today’s your day. For the entire day of Monday, April 27, 2020, and only until the stroke of midnight that marks the start of Tuesday, April 28, 2020, Manning’s solo liveProject courses are selling for $10 instead of the usual $50 or $60.
Manning liveProjects are learn-by-doing exercises. They start with a challenge that isn’t all that different from one you might encounter on the job, and the whole exercise is about addressing that challenge. The project is broken into several milestones where you can check your progress against a tested reference implementation. Along the way, you’ll have access to book and video resources selected for your project, as well as opportunities to collaborate with other participants. You do it at your own pace, and if you’d like extra help, there’s a (pricier) version with a mentor.
Here are the liveProjects on sale:
Discovering Disease Outbreaks from News Headlines
Imagine this: You are a data scientist at the WHO trying to get a handle on a virus outbreak. Your task? Use machine learning techniques to analyze news headlines gathered from around the globe for clues about its spread. What do you do?
Work and learn with over 1,000 other participants in this liveProject. In Discovering Disease Outbreaks from News Headlines, you’ll analyze a database of headlines gathered from around the world and cluster them on a map to find patterns indicating an epidemic. As you work through this liveProject, you’ll develop techniques for text extraction, data manipulation, clustering, interpreting algorithm outputs, and producing an actionable report.
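To give you a taste of the clustering step, here’s a minimal sketch of my own, not the project’s reference implementation: it assumes the headlines have already been geocoded (the toy headlines and coordinates below are made up) and groups them with scikit-learn’s DBSCAN.

```python
# A minimal sketch (my own, not the liveProject's reference solution) of
# clustering geocoded headlines with scikit-learn's DBSCAN. The headlines
# and coordinates are made-up stand-ins for the text-extraction and
# geocoding steps the project actually walks you through.
from sklearn.cluster import DBSCAN
import numpy as np

# (headline, latitude, longitude) -- toy, already-geocoded headlines
geocoded = [
    ("Zika outbreak reported in Miami",          25.76, -80.19),
    ("More Zika cases confirmed in Miami Beach", 25.79, -80.13),
    ("Dengue fever spreads in Recife",           -8.05, -34.88),
    ("Hospitals in Recife brace for dengue",     -8.06, -34.87),
]

coords = np.array([[lat, lon] for _, lat, lon in geocoded])

# eps is in degrees here; a real pipeline would use the haversine metric on radians
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(coords)

for (headline, _, _), label in zip(geocoded, labels):
    print(f"cluster {label}: {headline}")
```

The project itself covers the interesting parts this sketch skips: pulling the place names out of the headlines, geocoding them, and turning the clusters into an actionable report.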
Decoding Data Science Job Postings to Improve Your Resume
Imagine this: You step into the life of a budding data scientist looking for their first job in the industry. There are thousands of potential roles being advertised online, but only a few that are a good match to your skill set. What do you do?
In Decoding Data Science Job Postings to Improve Your Resume, you’ll learn how to use libraries in the Python data ecosystem to analyze text-based data, such as resumes and job listings. As you build this project, you’ll clean data from HTML files, use text similarity analysis to find the perfect job, and visualize your results using word clouds and plots. When you finish, you’ll be ready to apply your new skills to any text analysis task.
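Here’s a rough idea of what text similarity analysis can look like: a tiny sketch of my own using scikit-learn’s TF-IDF vectors and cosine similarity on made-up snippets, not the project’s actual solution.

```python
# A small sketch (my own assumptions, not the project's code): rank made-up
# job postings against a resume using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resume = "Python, pandas, scikit-learn, NLP, text classification, data visualization"
postings = [
    "Data scientist: Python, NLP, text analytics, scikit-learn",
    "Front-end developer: JavaScript, React, CSS",
    "ML engineer: deep learning, PyTorch, computer vision",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([resume] + postings)

# Compare the resume (row 0) against every posting and rank by similarity
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for score, posting in sorted(zip(scores, postings), reverse=True):
    print(f"{score:.2f}  {posting}")
```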
Human Pose Estimation with Deep Neural Networks
Imagine this: You are a machine learning engineer working for a company that develops augmented reality apps, including fitness coaching apps that need to reliably recognize the shape of a human body. Your challenge is to create an application for human pose estimation: detecting a human body in an image and estimating its key points, such as knees and elbows. What do you do?
In Human Pose Estimation with Deep Neural Networks, you’ll build a convolutional neural network from scratch, training your model using Google Colab and a GPU. At the end of this liveProject, you’ll have completed an interactive demo application that uses a webcam to detect and predict human keypoints!
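If keypoint prediction is new to you, here’s the general shape of the idea in a toy sketch of my own; the framework (PyTorch) and architecture here are my assumptions, not the course’s reference model.

```python
# A toy sketch (my own assumptions, not the course's model): a tiny CNN that
# regresses (x, y) coordinates for 17 body keypoints from an RGB image.
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self, num_keypoints=17):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_keypoints * 2)   # (x, y) per keypoint

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x).view(-1, self.num_keypoints, 2)

# Smoke test on a fake 256x256 RGB image
model = TinyPoseNet()
print(model(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 17, 2])
```

Real pose models usually predict per-keypoint heatmaps rather than raw coordinates, but the image-in, keypoints-out mapping is the same basic idea.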
Training Models on Imbalanced Text Data
Imagine this: You are a data scientist working for an online movie streaming service. Your bosses want a machine learning model that can analyze written customer reviews of your movies, but you discover that the data is biased towards negative reviews. Training a model on this imbalanced data would hurt its accuracy, and so your challenge is to create a balanced dataset for your model to learn from. What do you do?
In Training Models on Imbalanced Text Data, you’ll start by simulating your company’s data by deliberately introducing imbalance into an IMDb (Internet Movie Database) review dataset, then experiment with two different methods for balancing it. You’ll build and train a simple machine learning model on each dataset to compare the effectiveness of each approach.
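As a taste of one balancing approach, here’s a minimal sketch of my own that undersamples the majority class; it assumes the reviews live in a pandas DataFrame, which isn’t necessarily how the project organizes its data.

```python
# A minimal sketch (my own, with made-up data) of undersampling the majority
# class so that both labels appear equally often. Oversampling the minority
# class is the other common approach to compare against.
import pandas as pd

reviews = pd.DataFrame({
    "text":  ["great"] * 2 + ["terrible"] * 8,   # toy stand-in: 2 positive, 8 negative
    "label": ["pos"] * 2 + ["neg"] * 8,
})

# Shrink the majority class down to the size of the minority class
positives = reviews[reviews["label"] == "pos"]
negatives = reviews[reviews["label"] == "neg"].sample(len(positives), random_state=42)

balanced = pd.concat([positives, negatives])
print(balanced["label"].value_counts())   # pos: 2, neg: 2
```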
Use Machine Learning to Detect Phishing Websites
Imagine this: You’re a data scientist employed by the cybersecurity manager of a large organization. Recently, your colleagues have received multiple fake emails containing phishing attacks, one of the most common—and most effective—online security threats. Your manager is worried that passwords or other information will be given to an attacker. What do you do?
In Use Machine Learning to Detect Phishing Websites, you’ll build a machine learning model that can detect whether a linked website is a phishing site. As you go, you’ll sort out what’s safe and what’s a security risk, use common Python libraries, clean and query datasets, perform hyperparameter tuning, and summarize the performance of your models.
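To show what hyperparameter tuning looks like in scikit-learn, here’s a hedged little sketch of my own on made-up URL features; the real project derives its features from an actual labelled phishing-website dataset.

```python
# A sketch on made-up data (my own, not the project's feature set): tune a
# random forest over a small parameter grid with GridSearchCV.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
import numpy as np

# toy feature matrix: [url_length, dot_count, has_at_symbol, uses_https]
X = np.array([[75, 5, 1, 0], [20, 1, 0, 1], [90, 6, 1, 0], [25, 2, 0, 1],
              [80, 4, 1, 0], [30, 1, 0, 1], [95, 7, 1, 0], [22, 2, 0, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = phishing, 0 = legitimate

grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [10, 50], "max_depth": [2, 4]},
    cv=2,   # tiny toy dataset; real cross-validation would use more folds
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```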
Building Domain Specific Language Models
Imagine this: You’re an NLP data scientist working for Stack Exchange. Your boss wants you to create language models that are tuned to the particular vocabulary of different Stack Exchange sites. Language is domain specific, so an insurance company’s documents will use very different terminology than a post on a social media site. Because of this, off-the-shelf NLP models trained on generic text can be inaccurate for specialized domains. What do you do?
In Building Domain Specific Language Models, you’ll build a language model capable of query completion, text generation, and sentence selection for the domain-specific language of the Cross Validated statistics and machine learning site. Challenges you’ll face include preparing your datasets, building and evaluating n-gram word-based language models, and building a character-based language model with AllenNLP. At the end, you’ll have built a foundation for any domain-specific NLP system by creating specialized, robust, and efficient language models!
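The core of an n-gram word model is surprisingly small. Here’s a bare-bones bigram counter of my own on made-up sentences, just to show the idea the project builds on; the real thing works with Cross Validated posts and proper evaluation.

```python
# A bare-bones bigram "language model" (my own toy example): count which
# word follows which, then use the counts for query completion.
from collections import Counter, defaultdict

corpus = [
    "the standard deviation of the sample",
    "the mean of the sample",
    "the variance of the distribution",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, cur in zip(words, words[1:]):
        bigrams[prev][cur] += 1

# Query completion: the most likely words to follow "the"
print(bigrams["the"].most_common(3))
```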
Training and Deploying an ML Model as a Microservice
Imagine this: You’re a developer for an ecommerce company. Customers provide reviews of your company’s products, which are used to give each product a rating. Until now, assigning ratings has been done manually. Your boss has decided that this is too expensive and time-consuming. Your mission is to automate the process. What do you do?
In Training and Deploying an ML Model as a Microservice, you’ll train a machine learning model to recognize and rank positive and negative reviews, expose that model through an API so your website and partner sites can benefit from automatic ratings, and build a small demonstration webpage that runs your model using FaaS, containers, and microservices. You’ll learn how all the parts of machine learning tie together, and how to effectively deploy a model to production.
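As a rough illustration, here’s a tiny sketch of my own that trains a review classifier with scikit-learn and exposes it through a Flask endpoint; Flask is just my stand-in here, and the course’s FaaS-and-containers setup is more involved.

```python
# A rough sketch under my own assumptions (scikit-learn + Flask), not the
# course's setup: train a tiny review classifier and serve it as /predict.
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training reviews; a real service would train on your product reviews
reviews = ["great product, works perfectly", "terrible, broke after a day",
           "love it, highly recommend", "waste of money, very disappointed"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["review"]
    return jsonify({"positive": bool(model.predict([text])[0])})

if __name__ == "__main__":
    app.run(port=5000)
```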
Monitoring Changes in Surface Water Using Satellite Image Data
Imagine this: You’re a data scientist at UNESCO. Your job involves assessing long-term changes to freshwater deposits. Recently, two satellites have given you a massive amount of new data in the form of satellite imagery. Your task is to build a deep learning algorithm that can process this data and automatically detect water pixels in the imagery of a region. What do you do?
In Monitoring Changes in Surface Water Using Satellite Image Data, you’ll design, implement, and evaluate a convolutional neural network model for image pixel classification, or image segmentation. Your challenges will include compiling your data, training your model, evaluating its performance, and providing a summary of your findings to your superiors. Throughout, you’ll use the Google Colaboratory coding environment to access free GPU compute resources and speed up your training times!
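For a sense of what pixel classification means in code, here’s a toy fully convolutional network of my own in PyTorch (the framework choice is mine, not necessarily the project’s) that produces one water-or-not score per pixel of a multi-band tile.

```python
# A toy sketch (my own assumptions): a tiny fully convolutional network that
# outputs one water / not-water logit per pixel of a multi-band satellite tile.
import torch
import torch.nn as nn

class TinyWaterSegmenter(nn.Module):
    def __init__(self, in_bands=4):   # e.g. red, green, blue, near-infrared
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),      # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = TinyWaterSegmenter()
tile = torch.randn(1, 4, 128, 128)    # fake 128x128 tile with 4 spectral bands
print(model(tile).shape)              # torch.Size([1, 1, 128, 128])
# Training would compare these logits against a binary water mask,
# e.g. with nn.BCEWithLogitsLoss.
```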
3D Medical Image Analysis with PyTorch
Imagine this: You’re a machine learning engineer at a healthcare imaging company, processing and analyzing MR brain images. Your current medical image analysis pipelines are set up to use two types of MR images, but a new set of customer data includes only one of those types! What do you do?
In 3D Medical Image Analysis with PyTorch, your challenge is to build a convolutional neural network that can perform an image translation to provide you with the missing data. Utilizing the powerful PyTorch deep learning framework, you’ll learn techniques for computer vision that are easily transferable outside of medical imaging, such as depth estimation in natural images for self-driving cars, removing rain from natural images, and working with 3D data.
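Here’s the shape of the image-translation idea in a minimal PyTorch sketch of my own, not the course’s model: a stack of 3D convolutions that maps one type of MR volume to an estimate of the missing type.

```python
# A minimal sketch (my own, not the course's architecture): 3D convolutions
# that map one single-channel MR volume to a synthesized volume of another type.
import torch
import torch.nn as nn

translator = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),              # one output channel: the synthesized volume
)

# Fake 64x64x64 single-channel volume: (batch, channel, depth, height, width)
volume = torch.randn(1, 1, 64, 64, 64)
synthesized = translator(volume)
print(synthesized.shape)              # torch.Size([1, 1, 64, 64, 64])
# Training would minimize a reconstruction loss (e.g. nn.L1Loss)
# against paired volumes of the target type.
```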
I ordered them all, paying $90 in the process. I’ll write about my experiences as I do each of these courses.
If you’re interested, go visit the promo page for these discounted liveProjects and place your order before midnight!