A Live2D pose capturer built on top of JetBot, with MediaPipe and KalidoKit.
The project consists of two components: an image-processing backend built with MediaPipe, and a Live2D model viewer built with KalidoKit. The two components, together with JetBot, are connected via WebSockets.
The main concept of live2d-pose is to connect JetBot and the browser with WebSockets: the backend extracts face landmarks from images sent by JetBot, then forwards the results to the browser.
The backend is based on MediaPipe Face Landmarker by Google. The following code shows the basic usage of MediaPipe Face Landmarker:
```python
import mediapipe as mp

BaseOptions = mp.tasks.BaseOptions
FaceLandmarker = mp.tasks.vision.FaceLandmarker
FaceLandmarkerOptions = mp.tasks.vision.FaceLandmarkerOptions
VisionRunningMode = mp.tasks.vision.RunningMode

option = FaceLandmarkerOptions(
    base_options=BaseOptions(model_asset_path=model_path),
    running_mode=VisionRunningMode.LIVE_STREAM,
    result_callback=self.callback,  # invoked with (result, output_image, timestamp_ms)
)
with FaceLandmarker.create_from_options(option) as landmarker:
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=image)  # image: numpy.ndarray (RGB)
    landmarker.detect_async(mp_image, timestamp_ms)
```
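The `result_callback` receives a `FaceLandmarkerResult`, and the backend's job is to forward those landmarks to the browser. The following is a minimal sketch of such a callback; how the landmarks are serialized and pushed over the WebSocket is an assumption for illustration, not necessarily the exact wire format used in this repository:

```python
import json

def callback(result, output_image, timestamp_ms):
    # Skip frames where no face was detected.
    if not result.face_landmarks:
        return
    # Flatten the first detected face into plain {x, y, z} dicts so the
    # browser can feed them to KalidoKit's Face.solve().
    points = [
        {"x": lm.x, "y": lm.y, "z": lm.z}
        for lm in result.face_landmarks[0]
    ]
    message = json.dumps({"points": points})  # field name is illustrative
    # ...send `message` to the browser over the WebSocket connection.
```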
KalidoKit is a blendshape and kinematics calculator. Originally, KalidoKit accepts landmarks directly from MediaPipe running in the browser, but we separate the two so that Live2D model binding stays in the browser while face tracking runs on JetBot.
Using KalidoKit is also simple. You can get started as soon as you receive a MediaPipe landmark result over the WebSocket:
```javascript
import { Face } from "kalidokit";

const riggedFace = Face.solve(points, {
  runtime: "mediapipe",
  imageSize: {
    width: 640,
    height: 480,
  },
});
// Control Live2D model...
```
You need to store the Live2D model (`.model3.json`) in a place accessible to the browser.
- Clone the git repository: `git clone https://github.com/zeithrold/live2d-pose`
- Create a virtual environment with `python -m venv venv` and install dependencies with `pip install -r requirements.txt`. Note: MediaPipe doesn't provide wheels for the aarch64 architecture.
- Download the MediaPipe model from Direct Link and save it to `mp/models/`.
- Change to the `mp` directory and run `python app.py [...args]`.
- Run a custom camera-capturing script that connects to the backend machine. (We provide an example script for a local machine using a webcam; see also the sketch after the protocol description below.)
- Open the link displayed by the backend, and enjoy.
The arguments for `app.py` are as follows:

```
--ip IP, -i IP          server IP to bind
--port PORT, -p PORT    server port to bind
--model MODEL           URL where the Live2D model (.model3.json) is stored
--frontend FRONTEND     URL where the KalidoKit frontend is served
```
If you intend to serve the KalidoKit frontend on a local machine (e.g. with SimpleHTTPServer), DON'T FORGET to set CORS headers properly.
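For example, here is a minimal sketch of serving the frontend locally with permissive CORS headers using Python's built-in `http.server`; the port and the wildcard origin are illustrative choices, not requirements of the project:

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Let the viewer page fetch assets served from this origin.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

if __name__ == "__main__":
    # Serve the current directory on port 8080.
    ThreadingHTTPServer(("0.0.0.0", 8080), CORSRequestHandler).serve_forever()
```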
For the Live2D model URL, a possible example is `http://example-s3.com/live2d/zeithrold.model3.json`.
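Putting the arguments together, a possible invocation (the IP, port, and frontend URL here are purely illustrative) is `python app.py -i 0.0.0.0 -p 8000 --model http://example-s3.com/live2d/zeithrold.model3.json --frontend http://localhost:8080`.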
Due to latency issues, we HIGHLY RECOMMEND connecting JetBot and the backend over a local network.
- When the image capturer first connects to the backend, the backend creates a MediaPipe thread. When initialization completes, the backend sends a JSON message: `{"type": "initialized"}`.
- Then the image capturer sends the image size (width and height) in JSON, e.g. `{"imageSize": {"width": 640, "height": 480}}`. When it is accepted, the backend sends a JSON message: `{"type": "ok"}`.
- The backend prints a URL for the web browser (or OBS). Once the page is ready, the image capturer sends binary OpenCV-encoded image data (compression is supported), e.g. `cv2.imencode(".jpg", img, params)[1].tobytes()`, and the whole pipeline starts.
Note that the sending frequency should not be too high; otherwise the backend has no time left to send Keep-Alive pings, and the connection will drop.
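The following is a minimal sketch of a local capture script that follows this protocol; the WebSocket URL, frame size, and frame rate are assumptions for illustration, and the actual example script shipped with the repository may differ:

```python
import asyncio
import json

import cv2
import websockets

async def capture(uri: str = "ws://127.0.0.1:8000") -> None:
    cap = cv2.VideoCapture(0)  # local webcam
    async with websockets.connect(uri) as ws:
        # 1. Wait for the backend to finish creating its MediaPipe thread.
        msg = json.loads(await ws.recv())
        assert msg.get("type") == "initialized"

        # 2. Announce the frame size, then wait for the "ok" acknowledgement.
        await ws.send(json.dumps({"imageSize": {"width": 640, "height": 480}}))
        msg = json.loads(await ws.recv())
        assert msg.get("type") == "ok"

        # 3. Stream JPEG-encoded frames, throttled so the backend still has
        #    time to send its Keep-Alive pings.
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (640, 480))
            await ws.send(cv2.imencode(".jpg", frame)[1].tobytes())
            await asyncio.sleep(1 / 15)  # roughly 15 FPS

if __name__ == "__main__":
    asyncio.run(capture())
```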