4 takeaways on improving your object detection model’s performance with image augmentation

One of the amazing things about computer vision is that you can apply random changes to existing images to increase your effective sample size. Suppose you have one photo containing a coffee mug. Then, copy that photo and rotate it 10 degrees clockwise. From your point of view, you haven’t done very much.

But you’ve (really easily) doubled the number of images you’re about to give your model! Your computer vision model now gets a whole new perspective on what that coffee mug looks like.
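As a concrete sketch of that one rotation (not the post's exact pipeline), here's how you might do it yourself with the Pillow library; the file names are placeholders.

```python
from PIL import Image

# Load the original photo (placeholder file name).
original = Image.open("coffee_mug.jpg")

# Rotate a copy 10 degrees clockwise. Pillow treats positive angles as
# counter-clockwise, so we pass -10. expand=True keeps the full rotated
# image in frame instead of clipping the corners.
rotated = original.rotate(-10, expand=True)

# Save the augmented copy alongside the original -- two training images
# where there used to be one.
rotated.save("coffee_mug_rotated_10deg.jpg")
```

In a real object detection workflow you'd also need to rotate the bounding box coordinates to match, which is exactly the bookkeeping that augmentation tooling handles for you.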

Now, creating two versions of the same existing photo of a coffee mug isn’t quite as good as…


Technology and team activities motivate our distributed team.

We’d like to think we’ve worked remotely pretty successfully: we raised a seed round, we’ve held first place on the Pioneer leaderboard for a record-breaking 25 weeks in a row, we publish at least five pieces of content each week (check out our blog and YouTube channel!), and we’ve grown our team to 7 full-time employees. (We’re currently hiring for a few more full-time roles!)

We’ve learned a lot! As companies and employees grapple with the choice of how (and even whether!) …


Zero-Shot Content Moderation with OpenAI’s New CLIP Model. (Source.)

When creating a platform on which people can create and share content, there’s often a question of content moderation. Content moderation can mean a whole host of different things, but by and large it describes rules or approaches that platforms use to classify content as acceptable or unacceptable.
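To make the "zero-shot" idea tangible, here's a minimal sketch using OpenAI's clip package: the moderation categories live entirely in text prompts, so no moderation-specific training data is needed. The labels and image path below are made up for illustration.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Moderation categories written as plain-language prompts
# (placeholder labels -- a real policy would define its own).
labels = ["a safe, family-friendly photo", "a violent or graphic photo"]

image = preprocess(Image.open("uploaded_photo.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # CLIP scores how well the image matches each text prompt;
    # softmax turns those scores into probabilities over the labels.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```

Changing the policy is as simple as changing the prompt strings, which is what makes this approach attractive for moderation.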


Co-authored by Matt Brems and Shanaz Chowdhery.

“When we get back to normal.”

“After things open back up.”

Frequent refrains like these litter social media — and likely your conversations. Employers and some employees crave the opportunity to return to the traditional office and resume “life as usual.” Putting aside the potential lingering effects of COVID-19 on our economy, labor market, society, and health, we wanted to take a moment to consider what a return to the office (RTO) actually looks like.

Even if we return to the office, it’s almost certain that some workers will want or need to…


Explaining OpenAI’s CLIP Model: What it is, how it works, and why it’s cool.

ELI5 (Explain Like I’m 5) CLIP: Beginner’s Guide to OpenAI’s CLIP Model. (Source.)

You may have heard about OpenAI’s CLIP model. If you looked it up, you read that CLIP stands for “Contrastive Language-Image Pre-training.” That doesn’t immediately make much sense to me, so I read the paper in which the CLIP model was developed — and the corresponding blog post.

I’m here to break CLIP down for you in a — hopefully — accessible and fun read! In this post, I’ll cover:

  • what CLIP is,
  • how CLIP works, and
  • why CLIP is cool.

What is CLIP?

CLIP is the first multimodal (in this case, vision and text) model tackling computer vision and was recently released by…
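To make "multimodal" concrete, here's a hedged sketch using OpenAI's clip package: images and text are embedded into the same vector space, so you can compare an image directly against captions with cosine similarity. The file name and captions are placeholders; device is pinned to CPU just to keep the sketch simple.

```python
import clip
import torch
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")

image = preprocess(Image.open("coffee_mug.jpg")).unsqueeze(0)
captions = clip.tokenize(["a photo of a coffee mug", "a photo of a dog"])

with torch.no_grad():
    # Both encoders map into the same shared embedding space.
    image_features = model.encode_image(image)
    text_features = model.encode_text(captions)

# Normalize, then dot products become cosine similarities.
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = image_features @ text_features.T

print(similarity)  # higher score = better image/caption match
```

That shared embedding space is the "contrastive" training payoff: matching image/text pairs are pulled together, mismatched pairs pushed apart.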


Building an app for blood cell count detection

The app in action. (Image by Roboflow and Streamlit; authorized to use.)

Most technology is designed to make your life, or your work, easier. If your work involves building computer vision into your applications, using the Roboflow platform gives you everything you need.

Streamlit is an open-source platform that enables you to convert your Python scripts to apps and deploy them instantly. Streamlit and Roboflow can work hand-in-hand, allowing you to tackle computer vision problems and visualize your output so you can make better decisions faster.

In this post, we’ll walk you through using Roboflow and Streamlit together by showing you how to:

  1. Fit an object detection model in Roboflow
  2. Use an…
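As a rough sketch of where this is headed (not the exact code from the post), a Streamlit app that sends an uploaded image to a hosted detection model might look like the following. The model endpoint shape, model ID, and API key are placeholder assumptions; check Roboflow's docs for the current hosted inference API before using it.

```python
import base64

import requests
import streamlit as st

# Placeholders -- fill these in from your own Roboflow project settings.
MODEL_ENDPOINT = "https://detect.roboflow.com/your-model/1"  # assumed endpoint shape
API_KEY = "YOUR_API_KEY"

st.title("Blood Cell Count Detection")

uploaded = st.file_uploader("Upload a blood smear image", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    st.image(uploaded, caption="Uploaded image")

    # Send the image to the hosted model and display the raw predictions.
    img_b64 = base64.b64encode(uploaded.getvalue()).decode("utf-8")
    response = requests.post(
        MODEL_ENDPOINT,
        params={"api_key": API_KEY},
        data=img_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    predictions = response.json().get("predictions", [])

    st.write(f"Detected {len(predictions)} cells")
    st.json(predictions)
```

Saved as app.py, this would run locally with `streamlit run app.py`.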


As compared to the Intel-based 13" MacBook Pro

Unboxing my new MacBook! (Image by author.)

A few weeks ago, Apple released its first custom-designed silicon chip for the Mac, the M1. There have been several impressive benchmarks around its performance relative to its Intel-based predecessors, but we were interested in putting it through its paces on a machine learning (and, specifically, a computer vision) workload.

While Apple announced support for TensorFlow training on the M1, the toolchain isn’t quite ready yet. …


And why unbalanced classes are a problem.

(Image by Roboflow; authorized to use.)

Suppose you’re trying to teach an alien — like one of the crewmates from the wildly popular game Among Us — to tell the difference between a human and a dog.

What you might do is show the alien 100 pictures of humans and 100 pictures of dogs. Hopefully, the alien picks up on what makes a human (taller, stands on two legs, opposable thumbs) and what makes a dog (shorter, stands on four legs, paws with no opposable thumbs, tail, usually much cuter).

If you’ve done your job, the alien should be able to look at a…
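To see where this analogy is going, here's a small hypothetical sketch that tallies labels in a training set and derives per-class weights, one common remedy when the classes are unbalanced. The counts are made up for illustration.

```python
from collections import Counter

# Hypothetical labels for a training set -- note the imbalance.
labels = ["human"] * 100 + ["dog"] * 10

counts = Counter(labels)
total = sum(counts.values())

print(counts)  # Counter({'human': 100, 'dog': 10})

# Weight each class inversely to its frequency so the rare class
# contributes more to the loss during training.
class_weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(class_weights)  # {'human': 0.55, 'dog': 5.5}
```

With 100 humans for every 10 dogs, a lazy model can score 91% accuracy by always guessing "human," which is exactly why the imbalance needs to be addressed.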


And how to get your kernel to stop dying

(Image by Roboflow.)

You’ve probably heard a lot about the MacBook that contains the new Apple M1 chip. Quick summary: It’s fast. Like, really fast. You, a data scientist or related tech professional, may have bought one.

Disclaimer: We’ll attempt to keep this updated as best we can. These instructions are up to date as of November 30, 2020.


Image of a black pug looking up at the camera while sitting on a sidewalk.
Picture of my dog, Paddington, with bounding boxes showing leaves are definitely not treats. (Image by Roboflow.)

Computer vision has the potential to revolutionize the world. So far, computer vision has helped humans work toward solving lots of problems, like reducing traffic gridlock and monitoring environmental health.

Matt Brems

Data Scientist | Operator | Educator | Consultant
