Azure Content Safety provides the /contentsafety/image:analyze API for image analysis and moderation purposes. It’s similar to Azure’s text moderation API in a number of ways.
It takes three input parameters in the request body:
image (required): This is the main parameter of the API. You provide the image data that you want to analyze, either as a Base64-encoded image or as a blobUrl pointing to the image.
categories (optional): Similar to the text analysis API, you can use this parameter to specify the list of harm categories for which you want your image to be analyzed. By default, the API tests the image against all the default categories provided by the Azure Content Safety team.
outputType (optional): This refers to the number of severity levels the categories will have in the analysis results. This API only supports FourSeverityLevels; that is, severity values for any category will be 0, 2, 4, or 6.
A sample request body for image analysis can look something like this:
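(The content value below is a placeholder for the actual Base64-encoded image bytes.)

{
  "image": {
    "content": "<Base64-encoded image data>"
  },
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "outputType": "FourSeverityLevels"
}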
The returned response will contain categoriesAnalysis, which is a list of ImageCategoriesAnalysis JSON objects that include the category and its severity level, as determined by the moderation API.
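For example, a response might look along these lines, with illustrative severity values:

{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 2 }
  ]
}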
Since this module will use the Python SDK provided by the Azure team instead of making raw API calls, let’s quickly cover everything you need to know about the SDK for image moderation.
Understanding Azure AI Content Safety Python Library for Image Moderation
The first step in creating an image moderation system using Azure's Python SDK is to create an instance of ContentSafetyClient, similar to what you did for text moderation.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = ContentSafetyClient(endpoint, credential)
The above code is the same as it was in the text module. If you want to understand it in detail, you can revisit the Understanding Text Moderation API section.
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Build request
# image_path holds the path of the image file you want to analyze
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(image=ImageData(content=file.read()))

# Analyze image
response = client.analyze_image(request)
In the code above, you're passing your request to the client using an AnalyzeImageOptions object.
Understanding AnalyzeImageOptions
Similar to AnalyzeTextOptions, the AnalyzeImageOptions object is used to construct the request for image analysis. It has the following properties:
image (required): This will contain the information about the image that needs to be analyzed. It accepts ImageData as the data type. An ImageData object accepts two types of values - content and blob_url. You're allowed to provide only one of these. When providing image data as content, the image should be in a Base64-encoded format, the image size should be between 50 x 50 pixels and 2048 x 2048 pixels, and it should not exceed 4MB.
categories (optional): You can use this property to specify the categories for which you want to analyze your image. If not specified, the API will analyze the content for all categories. It accepts a list of ImageCategory values. At the time of writing this module, the possible values include ImageCategory.HATE, ImageCategory.SEXUAL, ImageCategory.VIOLENCE, and ImageCategory.SELF_HARM.
output_type (optional): This refers to the number of severity levels the categories will have in the analysis results. At the time of writing this module, it only accepts the FourSeverityLevels value, which is also its default value if not specified.
I miphbo AlofjciAtipoAxkaasp guderaviih hep suec mibu zhew:
Once the image analysis is finished, you can use the response received from the method client.analyze_image to decide whether to approve the image or block it.
The analyze_image method returns an AnalyzeImageResult. AnalyzeImageResult contains only one property - categories_analysis, which is a list of ImageCategoriesAnalysis objects. Each ImageCategoriesAnalysis contains the category and the severity level determined for it by the image analysis API.
Hie yuc bzenekp xdo EfofptiAyikaCetuxd vaxyuvyu ib ryu rikjixacf kad:
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import ImageCategory

# 1. Analyze image
try:
    response = client.analyze_image(request)
except HttpResponseError as e:
    print("Analyze image failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. Extract the result for each category
hate_result = next(item for item in response.categories_analysis
                   if item.category == ImageCategory.HATE)
self_harm_result = next(item for item in response.categories_analysis
                        if item.category == ImageCategory.SELF_HARM)
sexual_result = next(item for item in response.categories_analysis
                     if item.category == ImageCategory.SEXUAL)
violence_result = next(item for item in response.categories_analysis
                       if item.category == ImageCategory.VIOLENCE)

# 3. Print the severity of each harmful category found in the image content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")