How to Use the for my eyes only2 Object Detection API (2024)

Roboflow Inference is Roboflow's open source deployment package for developer-friendly vision inference.
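For local deployment in a Python script, a minimal sketch with the open-source `inference` package might look like the following (the `get_model` helper and `.infer()` call reflect recent releases of the package; treat the exact API surface as an assumption and check the Inference docs for your version):

```python
# pip install inference
from inference import get_model

# Load the model from Roboflow by its model ID (replace the placeholders).
model = get_model(model_id="MODEL_ENDPOINT/VERSION", api_key="API_KEY")

# Run inference on a local image; results contain predicted boxes, classes, and confidences.
results = model.infer("YOUR_IMAGE.jpg")
print(results)
```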

How to Deploy the for my eyes only2 Detection API

Using Roboflow, you can deploy your object detection model to a range of environments, including:

  • Luxonis OAK
  • Raspberry Pi
  • NVIDIA Jetson
  • A Docker container
  • A web page
  • iOS
  • A Python script using the Roboflow SDK.

Below, we have instructions on how to use our deployment options.

Code Snippets

Snippets for the hosted API are provided below for Python, cURL, JavaScript (Node.js and front-end web), Swift, and .NET.

## Infer on Local and Hosted Images

To install dependencies, run `pip install inference-sdk`. Then, add the following code snippet to a Python script:

```python
from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="API_KEY"
)

result = CLIENT.infer("your_image.jpg", model_id="MODEL_ENDPOINT/VERSION")
```

[See the inference-sdk docs](https://inference.roboflow.com/inference_helpers/inference_sdk/)
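The hosted API's response is a JSON object that typically contains a `predictions` list, where each entry holds the box center, size, class, and confidence. A small sketch of iterating over the `result` returned above (the field names are the usual Roboflow response keys; verify them against your own response):

```python
# Assuming `result` is the dictionary returned by CLIENT.infer(...) above.
for prediction in result.get("predictions", []):
    # Roboflow reports box centers plus width/height in pixels.
    x, y = prediction["x"], prediction["y"]
    w, h = prediction["width"], prediction["height"]
    print(f"{prediction['class']} ({prediction['confidence']:.2f}) "
          f"at ({x}, {y}), size {w}x{h}")
```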

## Linux or macOS

Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:

```
base64 YOUR_IMAGE.jpg | curl -d @- \
"https://detect.roboflow.com/MODEL_ENDPOINT/VERSION?api_key=API_KEY"
```

Inferring on an image hosted elsewhere on the web via its URL (don't forget to [URL encode it](https://www.urlencoder.org/)):

```
curl -X POST "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION?\
api_key=API_KEY&\
image=ENCODED_IMAGE_URL"
```

## Windows

You will need to install [curl for Windows](https://curl.se/windows/) and [GNU's base64 tool for Windows](http://gnuwin32.sourceforge.net/packages/coreutils.htm). The easiest way to do this is to use the [git for Windows installer](https://git-scm.com/downloads), which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation. Then you can use the same commands as above.
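If you would rather make the same raw HTTP calls from Python instead of curl, here is a sketch using the `requests` library (the endpoint and parameters mirror the curl commands above; the choice of `requests` is an assumption for illustration):

```python
import base64
import requests

API_URL = "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION"

# Local file: POST the base64-encoded image as the request body.
with open("YOUR_IMAGE.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    API_URL,
    params={"api_key": "API_KEY"},
    data=image_b64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.json())

# Hosted image: pass its URL as a query parameter (requests URL-encodes it for you).
resp = requests.post(API_URL, params={"api_key": "API_KEY", "image": "IMAGE_URL"})
print(resp.json())
```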

## Node.js

We're using [axios](https://github.com/axios/axios) to perform the POST request in this example, so first run `npm install axios` to install the dependency.

### Inferring on a Local Image

```
const axios = require("axios");
const fs = require("fs");

const image = fs.readFileSync("YOUR_IMAGE.jpg", {
    encoding: "base64"
});

axios({
    method: "POST",
    url: "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION",
    params: {
        api_key: "API_KEY"
    },
    data: image,
    headers: {
        "Content-Type": "application/x-www-form-urlencoded"
    }
})
.then(function(response) {
    console.log(response.data);
})
.catch(function(error) {
    console.log(error.message);
});
```

### Inferring on an Image Hosted Elsewhere via URL

```
const axios = require("axios");

axios({
    method: "POST",
    url: "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION",
    params: {
        api_key: "API_KEY",
        image: "IMAGE_URL"
    }
})
.then(function(response) {
    console.log(response.data);
})
.catch(function(error) {
    console.log(error.message);
});
```

## Front-End Web

### Inferring on a Local Image in Browser

We have realtime on-device inference available via roboflow.js; [see the documentation here](https://docs.roboflow.com/inference/web-browser). This will load your model to run realtime inference directly in your users' web browser using WebGL instead of passing images to the server side.

### Inferring on a Local Image via API

Note: you shouldn't expose your Roboflow API key in the front end to users outside of your organization. This snippet should either use your users' API key (for example, if you're building model-assisted labeling into your own labeling tool) or be put behind authentication so it's only usable by users who already have access to your Roboflow workspace.

```
import axios from 'axios';

const loadImageBase64 = (file) => {
    return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.readAsDataURL(file);
        reader.onload = () => resolve(reader.result);
        reader.onerror = (error) => reject(error);
    });
}

const image = await loadImageBase64(fileData);

axios({
    method: "POST",
    url: "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION",
    params: {
        api_key: "API_KEY"
    },
    data: image,
    headers: {
        "Content-Type": "application/x-www-form-urlencoded"
    }
})
.then(function(response) {
    console.log(response.data);
})
.catch(function(error) {
    console.log(error.message);
});
```
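One way to follow the note above and keep your API key out of the browser is to route inference requests through a small server-side proxy that enforces your own authentication before forwarding to Roboflow. A minimal sketch in Python with Flask (the framework, route name, and header check are assumptions for illustration, not part of Roboflow's API):

```python
# pip install flask requests
import base64

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ROBOFLOW_URL = "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION"
API_KEY = "API_KEY"  # kept server-side only, never sent to the browser


@app.route("/infer", methods=["POST"])
def infer():
    # Replace this check with your real user authentication.
    if request.headers.get("X-App-Token") != "EXPECTED_TOKEN":
        abort(401)

    image_b64 = base64.b64encode(request.files["image"].read()).decode("utf-8")
    resp = requests.post(
        ROBOFLOW_URL,
        params={"api_key": API_KEY},
        data=image_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    return jsonify(resp.json())
```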

## Uploading a Local Image Using base64

```
import UIKit

// Load Image and Convert to Base64
let image = UIImage(named: "your-image-path") // path to image to upload ex: image.jpg
let imageData = image?.jpegData(compressionQuality: 1)
let fileContent = imageData?.base64EncodedString()
let postData = fileContent!.data(using: .utf8)

// Initialize Inference Server Request with API Key, Model, and Model Version
var request = URLRequest(url: URL(string: "https://detect.roboflow.com/MODEL_ENDPOINT/VERSION?api_key=API_KEY&name=YOUR_IMAGE.jpg")!, timeoutInterval: Double.infinity)
request.addValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
request.httpBody = postData

// Execute Post Request
URLSession.shared.dataTask(with: request, completionHandler: { data, response, error in

    // Parse Response to String
    guard let data = data else {
        print(String(describing: error))
        return
    }

    // Convert Response String to Dictionary
    do {
        let dict = try JSONSerialization.jsonObject(with: data, options: []) as? [String: Any]
    } catch {
        print(error.localizedDescription)
    }

    // Print String Response
    print(String(data: data, encoding: .utf8)!)
}).resume()
```

## Uploading a Local Image

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

namespace UploadLocal
{
    class UploadLocal
    {
        static void Main(string[] args)
        {
            byte[] imageArray = System.IO.File.ReadAllBytes(@"YOUR_IMAGE.jpg");
            string encoded = Convert.ToBase64String(imageArray);
            byte[] data = Encoding.ASCII.GetBytes(encoded);

            string api_key = "API_KEY"; // Your API Key
            string DATASET_NAME = "MODEL_ENDPOINT"; // Set Dataset Name (Found in Dataset URL)

            // Construct the URL
            string uploadURL =
                "https://api.roboflow.com/dataset/" + DATASET_NAME + "/upload" +
                "?api_key=" + api_key +
                "&name=YOUR_IMAGE.jpg" +
                "&split=train";

            // Service Request Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = data.Length;

            // Write Data
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    using (StreamReader sr99 = new StreamReader(stream))
                    {
                        responseContent = sr99.ReadToEnd();
                    }
                }
            }
            Console.WriteLine(responseContent);
        }
    }
}
```

## Inferring on a Local Image

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

namespace InferenceLocal
{
    class InferenceLocal
    {
        static void Main(string[] args)
        {
            byte[] imageArray = System.IO.File.ReadAllBytes(@"YOUR_IMAGE.jpg");
            string encoded = Convert.ToBase64String(imageArray);
            byte[] data = Encoding.ASCII.GetBytes(encoded);

            string api_key = "API_KEY"; // Your API Key
            string model_endpoint = "MODEL_ENDPOINT/VERSION"; // Set model endpoint

            // Construct the URL
            string uploadURL =
                "https://detect.roboflow.com/" + model_endpoint +
                "?api_key=" + api_key +
                "&name=YOUR_IMAGE.jpg";

            // Service Request Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = data.Length;

            // Write Data
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    using (StreamReader sr99 = new StreamReader(stream))
                    {
                        responseContent = sr99.ReadToEnd();
                    }
                }
            }
            Console.WriteLine(responseContent);
        }
    }
}
```

## Inferring on an Image Hosted Elsewhere via URL

```csharp
using System;
using System.IO;
using System.Net;
using System.Web;

namespace InferenceHosted
{
    class InferenceHosted
    {
        static void Main(string[] args)
        {
            string api_key = "API_KEY"; // Your API Key
            string imageURL = "https://i.ibb.co/jzr27x0/YOUR-IMAGE.jpg";
            string model_endpoint = "MODEL_ENDPOINT/VERSION"; // Set model endpoint

            // Construct the URL
            string uploadURL =
                "https://detect.roboflow.com/" + model_endpoint +
                "?api_key=" + api_key +
                "&image=" + HttpUtility.UrlEncode(imageURL);

            // Service Point Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Http Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = 0;

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    using (StreamReader sr99 = new StreamReader(stream))
                    {
                        responseContent = sr99.ReadToEnd();
                    }
                }
            }
            Console.WriteLine(responseContent);
        }
    }
}
```

More Deployment Resources

  • Roboflow Documentation: Look through our full documentation for more information and resources on how to utilize this model.
  • Example Web App: Use this model with a full-fledged web application that has all sample code included.
  • Video Inference Script: Our example script performs inference on a video file with Roboflow Infer.
  • Deploy to NVIDIA Jetson: Perform inference at the edge with a Jetson via our Docker container.
  • Deploy to Luxonis OAK: Perform inference at the edge with an OAK device via our Docker container.
  • Deploy to iOS: Utilize your model on your mobile device.

FAQs

How do I start with object detection?

The steps you'll need to follow to create your first object detection model are:
  1. Decide on what you want to detect;
  2. Collect data for your project;
  3. Label data with bounding boxes or polygons;
  4. Train an object detection model using a model like Ultralytics YOLOv8 (a minimal training sketch follows this list); and finally,
  5. Test the model.
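As a concrete illustration of steps 4 and 5, here is a minimal training-and-testing sketch using the Ultralytics package (the checkpoint name, dataset YAML, and epoch count are assumptions for illustration, not values from this guide):

```python
# pip install ultralytics
from ultralytics import YOLO

# Step 4: start from a pretrained checkpoint and fine-tune on your labeled data.
# "my_dataset.yaml" is a hypothetical dataset config listing your train/val images and class names.
model = YOLO("yolov8n.pt")
model.train(data="my_dataset.yaml", epochs=50, imgsz=640)

# Step 5: test the trained model on a held-out image.
results = model("test_image.jpg")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```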

Is the TensorFlow Object Detection API deprecated?

Yes. The TensorFlow Object Detection API is deprecated and no longer actively maintained; its repository now refers users to TF-Vision instead.

How do I make an object detection model in Python?

The following steps show how to prepare input data for processing for each of the available data types; a short OpenCV-based sketch follows the list:
  1. Load the input image from an image file.
  2. Use OpenCV's VideoCapture to load the input video.
  3. Use OpenCV's VideoCapture to start capturing from the webcam.
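A minimal sketch of those three input paths with OpenCV; the `detect()` function is a hypothetical stand-in for whatever detector you use (for example, a model loaded as in the snippets earlier in this guide):

```python
# pip install opencv-python
import cv2


def detect(frame):
    # Placeholder: run your object detection model on a single frame here.
    return []


# 1. Load the input image from an image file.
image = cv2.imread("YOUR_IMAGE.jpg")
print(detect(image))

# 2. Use OpenCV's VideoCapture to load an input video file.
video = cv2.VideoCapture("input_video.mp4")

# 3. Use OpenCV's VideoCapture to start capturing from the webcam (device 0).
webcam = cv2.VideoCapture(0)
ok, frame = webcam.read()
if ok:
    print(detect(frame))
webcam.release()
```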

How do you train your own object detection model?

There are six steps to training an object detection model (a sketch using TensorFlow Lite Model Maker follows this list):
  1. Choose an object detection model architecture.
  2. Load the dataset.
  3. Train the TensorFlow model with the training data.
  4. Evaluate the model with the test data.
  5. Export as a TensorFlow Lite model.
  6. Evaluate the TensorFlow Lite model.
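These six steps map onto the TensorFlow Lite Model Maker workflow; a minimal sketch is below (Model Maker is an older library, and the CSV path and architecture choice here are assumptions for illustration):

```python
# pip install tflite-model-maker
from tflite_model_maker import model_spec, object_detector

# 1. Choose an object detection model architecture.
spec = model_spec.get("efficientdet_lite0")

# 2. Load the dataset ("your_annotations.csv" is a hypothetical file in Model Maker's CSV format).
train_data, validation_data, test_data = object_detector.DataLoader.from_csv("your_annotations.csv")

# 3. Train the TensorFlow model with the training data.
model = object_detector.create(
    train_data,
    model_spec=spec,
    batch_size=8,
    train_whole_model=True,
    validation_data=validation_data,
)

# 4. Evaluate the model with the test data.
print(model.evaluate(test_data))

# 5. Export as a TensorFlow Lite model.
model.export(export_dir=".")

# 6. Evaluate the TensorFlow Lite model.
print(model.evaluate_tflite("model.tflite", test_data))
```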

Why is object detection hard?

Object detection is customarily considered to be much harder than image classification, particularly because of these five challenges: dual priorities, speed, multiple scales, limited data, and class imbalance.

How do I install the TensorFlow 2 Object Detection API?

Installation of the Object Detection API is achieved by installing the `object_detection` package. From within `TensorFlow/models/research/`, run `cp object_detection/packages/tf2/setup.py .` followed by `python -m pip install .`

What is the best Python framework for object detection?

ImageAI is a user-friendly Python library that simplifies object detection tasks. It provides a comprehensive set of computer vision algorithms and deep learning methodologies for image recognition, object detection, video analysis, and more.

Which is better for object detection: OpenCV or TensorFlow?

OpenCV excels in traditional computer vision applications, offering robust image and video processing tools with strong community backing. TensorFlow, on the other hand, specializes in deep learning, providing extensive support for building and training neural networks.

What is the fastest object detection algorithm?

YOLO (You Only Look Once) is a popular one-stage object detection model known for its speed and accuracy. It processes images in real time, making it suitable for applications requiring quick detection.

Which dataset is best for object detection?

Popular object detection (bounding box) datasets include:
  • Synthetic Fruit Dataset
  • Drone Gesture Control Dataset
  • Raccoon Dataset
  • Chess Pieces Dataset
  • Mountain Dew Commercial Dataset
  • Packages Dataset
  • Pothole Dataset
  • 6 Sided Dice Dataset

What is the best programming language for object detection?

C, C++, and C# from the C family of languages are widely used for creating artificial intelligence programs. Their native libraries and specifications, such as EmguCV, OpenGL, and OpenCV, have built-in features for processing images and can be used for rapid development of AI apps.

Is TensorFlow good for object detection?

The TensorFlow Object Detection API is an open-source framework built on top of TensorFlow that makes it easy to construct, train and deploy object detection models.

How do I get started with object detection?

In object detection problems, we have to classify the objects in an image and also locate where those objects are present in the image. Image classification, by contrast, has only one task: classifying the objects in the image.

How do I train an AI model for object detection?

To train an AutoML object detection model in Google Cloud Vertex AI:
  1. In the Google Cloud console, in the Vertex AI section, go to the Datasets page.
  2. Click the name of the dataset you want to use to train your model to open its details page.
  3. Click Train new model.
  4. For the training method, select AutoML.

How do I start with image recognition?

Image recognition works in four steps:
  1. Extraction of pixel features from an image.
  2. Preparation of labeled images to train the model.
  3. Training the model to recognize images.
  4. Recognition of new images.

What is the basic idea of object detection?

CNN object detection refers to the use of Convolutional Neural Networks for detecting and localizing objects in images or videos. CNNs are trained on large datasets of labeled images to learn features and patterns associated with different objects.

How do I do real-time object detection?

Real-time object detection is typically solved using algorithms that combine object detection and tracking techniques to accurately detect and track objects as frames arrive. These algorithms use a combination of feature extraction, object proposal generation, and classification to detect and localize objects of interest. A minimal, detection-only loop is sketched below.
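Here is a minimal sketch of a real-time, detection-only loop over webcam frames (the pretrained Ultralytics model is an assumption for illustration; tracking, as described above, would be layered on top of this):

```python
# pip install ultralytics opencv-python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # pretrained detector, assumed for illustration
cap = cv2.VideoCapture(0)    # webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)            # run detection on the current frame
    annotated = results[0].plot()     # draw the predicted boxes onto the frame
    cv2.imshow("detections", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```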

How do you instantiate an object?

When you create an object, you are creating an instance of a class, therefore "instantiating" a class. The new operator requires a single, postfix argument: a call to a constructor. The name of the constructor provides the name of the class to instantiate. The constructor initializes the new object.
