Maix-III AXera-Pi: Try Python Programming
| Date | Author | Changes |
| --- | --- | --- |
| 2022.12.02 | lyx | Initial draft |
| 2022.12.15 | lyx | Added content |
| 2023.01.04 | lyx | Added face/license-plate recognition, Yolov6, and other new models |
| 2023.01.29 | lyx | Added detail notes |
Python is a widely used interpreted, high-level, general-purpose programming language. It supports multiple programming paradigms, including functional, imperative, reflective, structured, and object-oriented programming. It has a dynamic type system and garbage collection, manages memory automatically, and ships with a large and comprehensive standard library.
How does Python differ from C++?
As noted above, Python is an interpreted language: source files with the `.py` extension are passed directly to the interpreter, which produces the output, with no separate compilation step. C++ is a compiled language: the compiler must first translate the source code into object code, which is then executed to produce output. For beginners, Python is easier to learn, with simpler syntax and better readability; C++ has the edge in systems programming and performance, but its more complex syntax makes it more challenging for newcomers.
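A two-line script makes the contrast concrete. Save the sketch below as, say, `demo.py` (a hypothetical file name) and run it with `python3 demo.py`; there is no compile step, and thanks to dynamic typing the same name can hold values of different types:

```python
# demo.py - executed directly by the interpreter, no compilation needed
greeting = "hello"        # dynamic typing: no type declaration required
print(greeting)           # -> hello
greeting = len(greeting)  # the same name can later hold an int
print(greeting)           # -> 5
```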
Python Basics and Getting Started
Before programming in Python with Jupyter Notebook, you need some grounding in the Python language; you can study via the links provided below.
The following articles suit readers who already know some Python and want to go deeper:
Before starting, connect an AXera-Pi device to your computer and power it on, then log in to the Linux system as described in the system login guide. After logging in, run `ifconfig` in the terminal to find the device's IP address, then enter the `jupyter notebook` command to start it; the terminal prints a series of server messages.
Note: keep the terminal connected while using Jupyter Notebook; otherwise the connection to the local server is dropped and the notebook becomes unusable.
Open any browser and enter the IP address you just found with `:8888` appended to reach the web page (note: the loopback address `lo: 127.0.0.1` cannot be used). The page asks for a password; enter `root` to continue.
After entering the password you land on the Files page. Click New on the right to choose the editing environment that fits your needs:
Python3: the default Python 3 kernel
Text File: create a new text file
Folder: create a new folder
Terminal: open a user terminal in the browser, similar to a shell/adb terminal
Running Code
All example code in this article uses the GC4653 camera; if you have the OS04A10 model, see the Maix-III AXera-Pi FAQ for the required changes.
Choose the Python3 environment to enter the editing page. There are three ways to run Python code on the web page, shown in the examples below. After the code runs, the results are printed below the cell, and the live effect can be watched on the board's screen.
- Use `! + cmd` to run built-in scripts or shell commands, or edit Python code directly in the cell and click Run; here we run an NPU application as an example.
!ls home/images
air.jpg carvana02.jpg face5.jpg o2_resize.jpg ssd_car.jpg
aoa-2.jpeg carvana03.jpg grace_hopper.jpg pineapple.jpg ssd_dog.jpg
aoa.jpeg carvana04.jpg mobileface01.jpg pose-1.jpeg ssd_horse.jpg
bike.jpg cat.jpg mobileface02.jpg pose-2.jpeg
bike2.jpg cityscape.png mtcnn_face4.jpg pose-3.jpeg
cable.jpg dog.jpg mtcnn_face6.jpg pose.jpg
carvana01.jpg efficientdet.png mv2seg.png selfie.jpg
!/home/ax-samples/build/install/bin/ax_yolov5s -m /home/models/yolov5s.joint -i /home/images/cat.jpg -r 10
--------------------------------------
model file : /home/models/yolov5s.joint
image file : /home/images/cat.jpg
img_h, img_w : 640 640
[AX_SYS_LOG] AX_SYS_Log2ConsoleThread_Start
Run-Joint Runtime version: 0.5.10
--------------------------------------
[INFO]: Virtual npu mode is 1_1
Tools version: d696ee2f
run over: output len 3
--------------------------------------
Create handle took 487.99 ms (neu 22.29 ms, axe 0.00 ms, overhead 465.70 ms)
--------------------------------------
Repeat 10 times, avg time 22.57 ms, max_time 22.88 ms, min_time 22.46 ms
--------------------------------------
detection num: 1
15: 89%, [ 167, 28, 356, 353], cat
[AX_SYS_LOG] Waiting thread(2867848448) to exit
[AX_SYS_LOG] AX_Log2ConsoleRoutine terminated!!!
exit[AX_SYS_LOG] join thread(2867848448) ret:0
from IPython.display import Image
Image("yolov5s_out.jpg")
- You can also use `%run` to execute a module or `.py` file; here running `hello.py` is used as an example.
%run hello.py
hello world!
Files can also be imported from the web page: click Upload on the right to import the files you need into any directory.
- Exporting files written on the web page, for example:
Anything written on the web page can be exported as a document. By default it is saved as JSON with the `.ipynb` extension. To save in a different format, click File -> Download as -> and choose the format you need; the browser downloads it locally.
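Because an `.ipynb` file is plain JSON, its code cells can also be pulled out with nothing but the standard library. A minimal sketch, where the notebook structure below is a trimmed-down stand-in for a real saved file:

```python
import json

# A minimal stand-in for a saved .ipynb file: JSON with a "cells" list,
# each cell carrying a "cell_type" and its "source" lines.
nb_json = '''
{
  "cells": [
    {"cell_type": "code", "source": ["print('hello world!')\\n"]},
    {"cell_type": "markdown", "source": ["notes\\n"]}
  ],
  "nbformat": 4, "nbformat_minor": 5
}
'''

nb = json.loads(nb_json)
code = [''.join(c['source']) for c in nb['cells'] if c['cell_type'] == 'code']
print(code[0])  # -> print('hello world!')
```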
ax-pipeline-api
ax-pipeline-api: this project builds a Python API on top of ax-pipeline, letting users call the many built-in AI models from Python together with common Python libraries such as pinpong, opencv, numpy, and pillow, making AXera-Pi even easier to use!
Before using `ax-pipeline-api`, complete the following preparation:
- Install the `ax-pipeline-api` package
Because `ax-pipeline-api` is updated frequently, run the command below in the terminal before programming in Python to make sure you are on the latest version.
!pip3 install ax-pipeline-api -U
Requirement already satisfied: ax-pipeline-api in /usr/local/lib/python3.9/dist-packages (1.0.7)
Collecting ax-pipeline-api
Using cached ax-pipeline-api-1.0.7.tar.gz (15.5 MB)
Using cached ax-pipeline-api-1.0.6.tar.gz (19.5 MB)
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            x, y, w, h = i['bbox']['x'], i['bbox']['y'], i['bbox']['w'], i['bbox']['h']
            objname, objprob = i['objname'], i['prob']
            print(objname, objprob, x, y, w, h)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.free()

pipeline.free()
b'toilet' 0.4541160762310028 0.602770209312439 0.9111631512641907 0.16810722649097443 0.08513855934143066
b'toilet' 0.6902503967285156 0.606963574886322 0.9117961525917053 0.16024480760097504 0.08727789670228958
b'toilet' 0.6852353811264038 0.6020327210426331 0.9118891358375549 0.16942621767520905 0.08718493580818176
b'toilet' 0.7014157176017761 0.6041151881217957 0.9120386242866516 0.16582755744457245 0.0863698348402977
b'cup' 0.46080872416496277 0.6049922108650208 0.9143685698509216 0.1643451750278473 0.08425315469503403
As the yolov5s example above shows, the detection results are printed below the cell while it runs, and the live picture can be viewed on the board's screen. You can also swap in a `.so` library with a different function, or an AI model with a different effect, to build more AI applications from the same code.
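Note that the bbox values printed by `pipeline.result()` are normalized to the range 0-1. A minimal sketch of mapping them to pixel coordinates (`to_pixels` is a hypothetical helper written for this document; 854x480 is the panel size used in the drawing examples further below):

```python
# Map a normalized bbox from pipeline.result() to pixel coordinates.
# to_pixels is a hypothetical helper; 854x480 matches the screen size
# used in the later drawing examples.
def to_pixels(bbox, width=854, height=480):
    return (int(bbox['x'] * width), int(bbox['y'] * height),
            int(bbox['w'] * width), int(bbox['h'] * height))

box = {'x': 0.60, 'y': 0.91, 'w': 0.17, 'h': 0.085}
print(to_pixels(box))  # -> (512, 436, 145, 40)
```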
To change the `.so` library or the AI model, follow the examples below. This document covers only a few classic models; you can apply the same pattern to swap in other function libraries and models, and find more details in ax-pipeline-api. All examples in this article use the GC4653 camera; for other models, see the Maix-III AXera-Pi FAQ.
- Overview of the built-in libxxx*.so libraries:
Swap in a different `libxxx*.so` to try out different functions.
libsample_h264_ivps_joint_vo_sipy.so # input h264 video to ivps joint output screen vo
libsample_v4l2_user_ivps_joint_vo_sipy.so # input v4l2 /dev/videoX to ivps joint output screen vo
libsample_rtsp_ivps_joint_rtsp_vo_sipy.so # input video from rtsp to ivps joint output rtsp and screen vo
libsample_vin_ivps_joint_vo_sipy.so # input mipi sensor to ivps joint output screen vo
libsample_vin_ivps_joint_venc_rtsp_sipy.so # input mipi sensor to ivps joint output rtsp
libsample_vin_ivps_joint_venc_rtsp_vo_sipy.so # input mipi sensor to ivps joint output rtsp and screen vo
libsample_vin_ivps_joint_vo_h265_sipy.so # input mipi sensor to ivps joint output screen vo and save h265 video file
Replacing `libxxx*.so` can follow this example:
pipeline.load([
    'libsample_vin_ivps_joint_venc_rtsp_vo_sipy.so',
    '-p', '/home/config/yolov5s.json',
    '-c', '2',
])
- Overview of the built-in AI models:
The AI models live in the `/home/config` directory; swap the model to build different AI applications.
ax_bvc_det.json hrnet_pose_yolov8.json yolov5s_face_recognition.json
ax_person_det.json license_plate_recognition.json yolov5s_license_plate.json
ax_pose.json nanodet.json yolov6.json
ax_pose_yolov5s.json palm_hand_detection.json yolov7.json
ax_pose_yolov8.json pp_human_seg.json yolov7_face.json
crowdcount.json scrfd.json yolov7_palm_hand.json
hand_pose.json yolo_fastbody.json yolov8.json
hand_pose_yolov7_palm.json yolopv2.json yolov8_seg.json
hrnet_animal_pose.json yolov5_seg.json yolox.json
hrnet_pose.json yolov5s.json
hrnet_pose_ax_det.json yolov5s_face.json
Replacing the AI model can follow this example:
pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s_face.json',
    '-c', '2',
])
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s_face.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.free()

pipeline.free()
{'label': 0, 'prob': 0.44722938537597656, 'objname': b'face', 'bbox': {'x': 0.32209691405296326, 'y': 0.5537495017051697, 'w': 0.08040990680456161, 'h': 0.1871424913406372}, 'bHasBoxVertices': 0, 'nLandmark': 5, 'landmark': [{'x': 0.34694865345954895, 'y': 0.6272169351577759}, {'x': 0.3749236762523651, 'y': 0.6124960780143738}, {'x': 0.3601255416870117, 'y': 0.6418469548225403}, {'x': 0.35046160221099854, 'y': 0.6791355609893799}, {'x': 0.37618887424468994, 'y': 0.6769239902496338}]}
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5_seg.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.free()

pipeline.free()
{'label': 33, 'prob': 0.5606005191802979, 'objname': b'kite', 'bbox': {'x': 0.6935881972312927, 'y': 0.2124193012714386, 'w': 0.04874410480260849, 'h': 0.07958509773015976}, 'bHasBoxVertices': 0, 'bHasLandmark': 0, 'bHasMask': 1, 'mYolov5Mask': {'w': 9, 'h': 8, 'data': b'\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\xff\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\xff\xff\xff\x00\x00\x00\x00\x00\x00\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'}}
{'label': 33, 'prob': 0.6723322868347168, 'objname': b'kite', 'bbox': {'x': 0.6944284439086914, 'y': 0.21624213457107544, 'w': 0.05162983015179634, 'h': 0.08279800415039062}, 'bHasBoxVertices': 0, 'bHasLandmark': 0, 'bHasMask': 1, 'mYolov5Mask': {'w': 9, 'h': 8, 'data': b'\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\xff\xff\xff\xff\xff\x00\x00\xff\xff\xff\xff\xff\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\xff\xff\xff\x00\x00\x00\x00\x00\x00\xff\xff\x00\x00\x00\x00\x00\x00\x00\xff\xff\x00\x00\x00\x00\x00\x00\x00\xff\x00\x00\x00\x00\x00\x00\x00\x00'}}
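The `mYolov5Mask` field above packs a low-resolution segmentation mask as `w * h` bytes, `\x00` for background and `\xff` for the object. A sketch of decoding it into rows (`decode_mask` is a hypothetical helper written for this document, shown here on a small hand-made mask):

```python
# Decode the w*h byte mask carried in mYolov5Mask into a 2D list.
# decode_mask is a hypothetical helper; 0x00 = background, 0xff = object.
def decode_mask(mask):
    w, h, data = mask['w'], mask['h'], mask['data']
    return [[1 if data[y * w + x] else 0 for x in range(w)] for y in range(h)]

mask = {'w': 4, 'h': 2, 'data': b'\x00\xff\xff\x00\x00\x00\xff\x00'}
for row in decode_mask(mask):
    print(''.join('#' if v else '.' for v in row))
# .##.
# ..#.
```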
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/pp_human_seg.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.free()

pipeline.free()
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/ax_pose.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.free()

pipeline.free()
{'label': 0, 'prob': 0.41659796237945557, 'objname': b'person', 'bbox': {'x': 0.01200273260474205, 'y': 0.0, 'w': 0.9315435290336609, 'h': 0.9421796798706055}, 'bHasBoxVertices': 0, 'bHasLandmark': 17, 'landmark': [{'x': 0.6708333492279053, 'y': 0.23333333432674408}, {'x': 0.6427083611488342, 'y': 0.16851851344108582}, {'x': 0.6520833373069763, 'y': 0.14629629254341125}, {'x': 0.7322916388511658, 'y': 0.5055555701255798}, {'x': 0.7614583373069763, 'y': 0.06481481343507767}, {'x': 0.7541666626930237, 'y': 0.09444444626569748}, {'x': 0.7541666626930237, 'y': 0.1518518477678299}, {'x': 0.7124999761581421, 'y': 0.15925925970077515}, {'x': 0.5041666626930237, 'y': 0.08703703433275223}, {'x': 0.6739583611488342, 'y': 0.07407407462596893}, {'x': 0.690625011920929, 'y': 0.6814814805984497}, {'x': 0.7833333611488342, 'y': 0.25}, {'x': 0.7614583373069763, 'y': 0.25}, {'x': 0.35104167461395264, 'y': 0.6074073910713196}, {'x': 0.3489583432674408, 'y': 0.5777778029441833}, {'x': 0.0572916679084301, 'y': 0.5185185074806213}, {'x': 0.0677083358168602, 'y': 0.5185185074806213}]}
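The `ax_pose` result above carries 17 body landmarks. Assuming they follow the standard 17-point COCO keypoint order (an assumption; this document does not state the ordering), they can be paired with readable names like this:

```python
# Assumption: the 17 ax_pose landmarks follow the standard COCO ordering.
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle',
]

def name_landmarks(landmarks):
    # landmarks: list of {'x': ..., 'y': ...} dicts from pipeline.result()
    return {name: (p['x'], p['y']) for name, p in zip(COCO_KEYPOINTS, landmarks)}

pts = name_landmarks([{'x': 0.67, 'y': 0.23}] * 17)
print(pts['nose'])  # -> (0.67, 0.23)
```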
import time
from ax import pipeline

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/hand_pose.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        for i in tmp['mObjects']:
            print(i)
    # if tmp['nObjSize'] > 10: # try exit
    #     pipeline.free()

pipeline.free()
{'label': 0, 'prob': 0.948456346988678, 'objname': b'hand', 'bbox': {'x': 0.26589435338974, 'y': 0.26926565170288086, 'w': 0.46994149684906006, 'h': 0.4706382751464844}, 'bHasBoxVertices': 1, 'bbox_vertices': [{'x': 1.4048067331314087, 'y': -0.42393070459365845}, {'x': 1.2827746868133545, 'y': 1.74061918258667}, {'x': 0.06521528959274292, 'y': 1.5236728191375732}, {'x': 0.18724757432937622, 'y': -0.6408770084381104}], 'bHasLandmark': 21, 'landmark': [{'x': 0.3895833194255829, 'y': 0.6722221970558167}, {'x': 0.4635416567325592, 'y': 0.5925925970077515}, {'x': 0.5979166626930237, 'y': 0.4888888895511627}, {'x': 0.6979166865348816, 'y': 0.4148148000240326}, {'x': 0.7562500238418579, 'y': 0.442592591047287}, {'x': 0.7541666626930237, 'y': 0.5388888716697693}, {'x': 0.8166666626930237, 'y': 0.4314814805984497}, {'x': 0.7927083373069763, 'y': 0.3314814865589142}, {'x': 0.768750011920929, 'y': 0.25925925374031067}, {'x': 0.746874988079071, 'y': 0.5981481671333313}, {'x': 0.778124988079071, 'y': 0.43703705072402954}, {'x': 0.7260416746139526, 'y': 0.3203703761100769}, {'x': 0.706250011920929, 'y': 0.27222222089767456}, {'x': 0.703125, 'y': 0.6499999761581421}, {'x': 0.7291666865348816, 'y': 0.4611110985279083}, {'x': 0.6666666865348816, 'y': 0.3722222149372101}, {'x': 0.628125011920929, 'y': 0.3351851999759674}, {'x': 0.6416666507720947, 'y': 0.6981481313705444}, {'x': 0.6864583492279053, 'y': 0.5814814567565918}, {'x': 0.6625000238418579, 'y': 0.5092592835426331}, {'x': 0.6447916626930237, 'y': 0.4592592716217041}]}
from ax import pipeline
import time
import threading

def pipeline_data(threadName, delay):
    time.sleep(0.2) # wait for pipeline.work() is True
    for i in range(400):
        time.sleep(delay)
        tmp = pipeline.result()
        if tmp and tmp['nObjSize']:
            for i in tmp['mObjects']:
                print(i)
    pipeline.free() # 400 * 0.05s auto exit pipeline

thread = threading.Thread(target=pipeline_data, args=("Thread-1", 0.05, ))
thread.start()

pipeline.load([
    b'libsample_vin_ivps_joint_venc_rtsp_vo_sipy.so',
    b'-p', b'/home/config/hrnet_animal_pose.json',
    b'-c', b'2',
])

thread.join() # wait thread exit
{'label': 14, 'prob': 0.6244175434112549, 'objname': b'bird', 'bbox': {'x': 0.4825528562068939, 'y': 0.3995664715766907, 'w': 0.24243469536304474, 'h': 0.28279656171798706}, 'bHasBoxVertices': 0, 'bHasLandmark': 20, 'landmark': [{'x': 0.6266645789146423, 'y': 0.37942758202552795}, {'x': 0.6266645789146423, 'y': 0.37942758202552795}, {'x': 0.6039101481437683, 'y': 0.3660016655921936}, {'x': 0.7025129199028015, 'y': 0.32572388648986816}, {'x': 0.5015149116516113, 'y': 0.33243685960769653}, {'x': 0.5963252782821655, 'y': 0.37942758202552795}, {'x': 0.5470238924026489, 'y': 0.5539646148681641}, {'x': 0.4863452613353729, 'y': 0.42641833424568176}, {'x': 0.4863452613353729, 'y': 0.6076683402061462}, {'x': 0.6987205147743225, 'y': 0.6143813133239746}, {'x': 0.5735708475112915, 'y': 0.5606775879859924}, {'x': 0.5963252782821655, 'y': 0.5472516417503357}, {'x': 0.4863452613353729, 'y': 0.6882238984107971}, {'x': 0.5394390821456909, 'y': 0.33243685960769653}, {'x': 0.49013766646385193, 'y': 0.668084979057312}, {'x': 0.49013766646385193, 'y': 0.6882238984107971}, {'x': 0.4863452613353729, 'y': 0.6882238984107971}, {'x': 0.4863452613353729, 'y': 0.6747979521751404}, {'x': 0.4863452613353729, 'y': 0.6747979521751404}, {'x': 0.5053073167800903, 'y': 0.35257574915885925}]}
import time
from ax import pipeline
from PIL import Image, ImageDraw

# ready sipeed logo canvas
lcd_width, lcd_height = 854, 480
img = Image.new('RGBA', (lcd_width, lcd_height), (255,0,0,200))
ui = ImageDraw.ImageDraw(img)
ui.rectangle((20,20,lcd_width-20,lcd_height-20), fill=(0,0,0,0), outline=(0,0,255,100), width=20)
logo = Image.open("/home/res/logo.png")
img.paste(logo, box=(lcd_width-logo.size[0], lcd_height-logo.size[1]), mask=None)

def rgba2argb(rgba):
    r,g,b,a = rgba.split()
    return Image.merge("RGBA", (a,b,g,r))

canvas_argb = rgba2argb(img)

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/yolov5s.json',
    # '-p', '/home/config/yolov8.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    argb = canvas_argb.copy()
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        ui = ImageDraw.ImageDraw(argb)
        for i in tmp['mObjects']:
            x = i['bbox']['x'] * lcd_width
            y = i['bbox']['y'] * lcd_height
            w = i['bbox']['w'] * lcd_width
            h = i['bbox']['h'] * lcd_height
            objlabel = i['label']
            objprob = i['prob']
            ui.rectangle((x,y,x+w,y+h), fill=(100,0,0,255), outline=(255,0,0,255))
            ui.text((x,y), str(objlabel))
            ui.text((x,y+20), str(objprob))
    pipeline.config("ui_image", (lcd_width, lcd_height, "ARGB", argb.tobytes()))

pipeline.free()
print_data 2 False
import time
from ax import pipeline
from PIL import Image, ImageDraw

# ready sipeed logo canvas
lcd_width, lcd_height = 854, 480
img = Image.new('RGBA', (lcd_width, lcd_height), (255,0,0,200))
ui = ImageDraw.ImageDraw(img)
ui.rectangle((20,20,lcd_width-20,lcd_height-20), fill=(0,0,0,0), outline=(0,0,255,100), width=20)
logo = Image.open("/home/res/logo.png")
img.paste(logo, box=(lcd_width-logo.size[0], lcd_height-logo.size[1]), mask=None)

def rgba2argb(rgba):
    r,g,b,a = rgba.split()
    return Image.merge("RGBA", (a,b,g,r))

canvas_argb = rgba2argb(img)

pipeline.load([
    'libsample_vin_ivps_joint_vo_sipy.so',
    '-p', '/home/config/ax_pose.json',
    # '-p', '/home/config/hand_pose.json',
    # '-p', '/home/config/yolov5s_face.json',
    '-c', '2',
])

while pipeline.work():
    time.sleep(0.001)
    argb = canvas_argb.copy()
    tmp = pipeline.result()
    if tmp and tmp['nObjSize']:
        ui = ImageDraw.ImageDraw(argb)
        for i in tmp['mObjects']:
            if i["bHasBoxVertices"]:
                points = [(p['x'] * lcd_width, p['y'] * lcd_height) for p in i['bbox_vertices']]
                ui.polygon(points, fill=(100,0,0,255), outline=(255,0,0,255))
            else:
                x = i['bbox']['x'] * lcd_width
                y = i['bbox']['y'] * lcd_height
                w = i['bbox']['w'] * lcd_width
                h = i['bbox']['h'] * lcd_height
                ui.rectangle((x,y,x+w,y+h), fill=(100,0,0,255), outline=(255,0,0,255))
            for p in i["landmark"]:
                x, y = (int(p['x']*lcd_width), int(p['y']*lcd_height))
                ui.rectangle((x-4,y-4,x+4,y+4), outline=(255,0,0,255))
    pipeline.config("ui_image", (lcd_width, lcd_height, "ARGB", argb.tobytes()))

pipeline.free()
!ls home/images
air.jpg carvana02.jpg face5.jpg o2_resize.jpg ssd_car.jpg
aoa-2.jpeg carvana03.jpg grace_hopper.jpg pineapple.jpg ssd_dog.jpg
aoa.jpeg carvana04.jpg mobileface01.jpg pose-1.jpeg ssd_horse.jpg
bike.jpg cat.jpg mobileface02.jpg pose-2.jpeg
bike2.jpg cityscape.png mtcnn_face4.jpg pose-3.jpeg
cable.jpg dog.jpg mtcnn_face6.jpg pose.jpg
carvana01.jpg efficientdet.png mv2seg.png selfie.jpg
from PIL import Image, ImageDraw

pil_im = Image.open('home/images/bike2.jpg', 'r')
draw = ImageDraw.Draw(pil_im)
draw.arc((0, 0, 400, 400), start=0, end=300, fill='red', width=3)
draw.rectangle((20, 20, 200, 100), fill=(100, 20, 60), outline="#FF0000", width=3)
pil_im.show() # display(pil_im)