p5.js is a JavaScript library that enables media presentation, interactive drawing, and user input inside an area of a web page called the canvas.
I've gotten it to work on the Jetson Nano, since p5.js is just a JavaScript library, but only as long as I stick to primitive drawing functions. I cannot use the predefined constant VIDEO to access the camera.
The same p5.js code works fine on a Raspberry Pi.
Here is my HTML:
<!doctype html>
<html>
  <head>
    <script src="p5/p5.js"></script>
    <script src="p5/addons/p5.dom.js"></script>
    <script src="sketch.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
  </body>
</html>
and here is my sketch.js:

let capture;

function setup() {
  createCanvas(390, 240);
  capture = createCapture(VIDEO);  // ask the browser for a camera stream
  capture.size(320, 240);
  capture.hide();                  // draw it onto the canvas instead of the DOM
}

function draw() {
  background(20);
  image(capture, 0, 0, 320, 240);
  filter(INVERT);
}
This puts up a grayscale video of what the camera is seeing.
However, it does not work on the Jetson Nano. All I see is a blank canvas.
The camera on the Nano itself is working fine; this command puts up a huge viewer showing a live stream of what the camera sees:
gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink
When I run the code and look in the browser console, it shows this message:
p5.js says: createCapture() was expecting Function for parameter #1 (zero-based index), received an empty variable instead.
So clearly the constant VIDEO is not pointing to the video stream.
I tried replacing VIDEO with nvarguscamerasrc, but the console says that is an undefined variable.
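That makes sense in hindsight: besides the VIDEO and AUDIO constants, the only other first argument the p5 reference lists for createCapture() is a getUserMedia-style constraints object, which is still resolved by the browser rather than by GStreamer, so an element name has no meaning there. The constraints form looks like this (the 320x240 numbers are just my sketch's size):

let capture;

function setup() {
  createCanvas(390, 240);
  // Constraints object form -- still handled by the browser's getUserMedia(),
  // so a GStreamer element name like nvarguscamerasrc means nothing here.
  capture = createCapture({ video: { width: 320, height: 240 }, audio: false });
  capture.hide();
}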
I looked in the file p5.js and found many classes of objects that use the constant VIDEO.
The constant itself is defined in a separate file that ships with p5.js, named p5.dom.js:
p5.prototype.VIDEO = 'video';
However, I cannot determine how VIDEO finds the proper hardware, nor do I know how to tell p5.js where to look for the camera on the Nano.
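As far as I can tell from reading p5.dom.js, VIDEO is nothing more than that 'video' string: createCapture() turns it into a constraints object, hands it to the browser's navigator.mediaDevices.getUserMedia(), and wraps the resulting stream in a video element. Roughly this (paraphrased, not the actual library source):

navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then((stream) => {
    const elt = document.createElement('video');
    elt.srcObject = stream;   // the browser hands back a MediaStream
    elt.play();
  })
  .catch((err) => console.log('no camera visible to the browser:', err));

So the real question seems to be how to give Chromium on the Nano a camera that getUserMedia() can see, or how to get the argus stream onto the page some other way.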
I submitted a similar post on the Jetson Nano support forums, and the mod was kind enough to respond, but I think English is not his native language. Still, he clearly speaks it better than I speak Java.
I also posted on the Processing forums, but I think that is more of an artsy place than a hardcore hardware-hacking one.
So the issue is getting JavaScript to talk to the camera hardware on the Nano. There is a Python library for the Nano and its camera here:
and in the (/simple_camera.py) script there is a gstreamer_pipeline routine that sets up the camera. However, it relies on OpenCV. I think I need to make a JavaScript library that does something similar so I can use it in my p5.js sketch (a "sketch" is what a JavaScript file built on p5.js is called). But here's where I lose confidence and get lost.
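To make that idea concrete, here is the direction I'm thinking of (a rough, untested sketch; the port, file name, and resolution are things I made up): run the nvarguscamerasrc pipeline on the Nano inside a small Node.js script and re-serve the frames as a plain MJPEG stream over HTTP, which a browser can display like any other image.

// mjpeg-server.js -- rough sketch, untested on the Nano.
const http = require('http');
const { spawn } = require('child_process');

// Build gst-launch-1.0 arguments: same pipeline shape as the test command
// above, but ending in jpegenc -> multipartmux -> fdsink so the frames come
// out on stdout already formatted as a multipart/x-mixed-replace body.
function gstArgs(width, height, fps) {
  return [
    'nvarguscamerasrc', '!',
    `video/x-raw(memory:NVMM), width=${width}, height=${height}, framerate=${fps}/1, format=NV12`, '!',
    'nvvidconv', '!',
    'video/x-raw, format=I420', '!',
    'jpegenc', '!',
    'multipartmux', 'boundary=frame', '!',
    'fdsink', 'fd=1'
  ];
}

http.createServer((req, res) => {
  // One pipeline per request; good enough for a single viewer.
  const gst = spawn('gst-launch-1.0', gstArgs(320, 240, 30));
  res.writeHead(200, {
    'Content-Type': 'multipart/x-mixed-replace; boundary=frame',
    'Cache-Control': 'no-cache'
  });
  gst.stdout.pipe(res);                       // JPEG frames straight to the browser
  req.on('close', () => gst.kill('SIGINT'));  // release the camera when the tab closes
}).listen(8090);

On the p5.js side, createImg() pointed at that stream would stand in for createCapture(VIDEO):

let cam;

function setup() {
  createCanvas(390, 240);
  cam = createImg('http://<nano-ip>:8090/', 'nano camera');  // hypothetical address
  cam.hide();
}

function draw() {
  background(20);
  image(cam, 0, 0, 320, 240);
}

One catch I can already see: if the HTML page is served from a different origin than the stream, drawing it taints the canvas and filter(INVERT) will throw a security error, so the same little Node server would probably have to serve the page as well. I'm also not certain every browser keeps feeding new MJPEG frames to image() once the img element is drawn to a canvas, so treat this as a starting point rather than a solution.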
Why use JavaScript and not hack directly on the Nano? Well, having self-contained code in JavaScript will appeal to a much larger audience who are already using p5.js to make interactive web pages. There is even TensorFlow.js, which allows ML on video that a camera captures. I made a little app (on a regular x86 Linux box) that turns my webcam video into a live XKCD comic. It's rather trivial, and I learned the details from here:
But can you imagine an intro to Wendell’s podcast where the three hosts are stick figures talking on a whiteboard? Way better than deepfakes.