Recently I have been attempting to modify the source code of this page. The underlying technique of this interactive program is called sketch-rnn, a deep learning algorithm that can generate sketches. I need to access the real-time image on the canvas so that I can feed it as a 2D array to a convolutional neural network (CNN) and further improve the program. Is there a p5.js function that can help me achieve that?
It depends on what format the CNN accepts as input. p5.js renders to a <canvas /> element, so you can access that element's pixel data directly.
For example, this is something you can try in your browser console on the sketch_rnn_demo page:
```javascript
// access the default p5.js canvas
var canvasElement = document.querySelector('#defaultCanvas0');
// export the data as needed, for example encoded as a Base64 string:
canvasElement.toDataURL();
// access the <canvas/> 2D rendering context
var context = canvasElement.getContext('2d');
// access pixels:
context.getImageData(0, 0, canvasElement.width, canvasElement.height);
```
getImageData() returns an ImageData object whose data property is a 1D array of unsigned 8-bit integers (i.e. values from 0-255) in R,G,B,A order.
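To illustrate how that flat RGBA layout maps back to (x, y) coordinates, here is a small helper (the function name pixelAt is my own, not a p5.js or canvas API):

```javascript
// read the RGBA values of the pixel at (x, y) from a flat
// ImageData-style array: pixels are stored row by row,
// 4 bytes (R, G, B, A) per pixel
function pixelAt(data, width, x, y) {
  const i = (y * width + x) * 4;
  return { r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] };
}

// usage with the canvas data from above:
// const img = context.getImageData(0, 0, canvasElement.width, canvasElement.height);
// const px = pixelAt(img.data, img.width, 10, 20);
```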
You can also use get(x, y) in p5.js, which gives you 2D access to pixel data; however, this is much slower.
If your CNN takes in a 2D array, you still need to create this 2D array yourself and populate it with pixel values (using get(), for example). Be sure to double check the CNN input:
- is it a 2D array of 32-bit integers (e.g. R,G,B,A or A,R,G,B packed into a single int (0xAARRGGBB or 0xRRGGBBAA), just RGB, etc.)?
- what resolution should the 2D array be? (your sketch-rnn canvas may be a different size, and you might need to resize it to match what the CNN expects as input)
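As a sketch of that conversion step, assuming for illustration a CNN that takes a 2D array of grayscale values (the function name and the plain-average grayscale formula are my own choices, not part of p5.js):

```javascript
// convert a flat RGBA pixel array (as returned by getImageData().data)
// into a 2D array of grayscale values, one row per scanline
function imageDataToGray2D(data, width, height) {
  const rows = [];
  for (let y = 0; y < height; y++) {
    const row = [];
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      // simple average of R, G, B; swap in a weighted luminance
      // formula if the CNN expects one
      row.push(Math.round((data[i] + data[i + 1] + data[i + 2]) / 3));
    }
    rows.push(row);
  }
  return rows;
}

// if the CNN expects a different resolution, draw the canvas into a
// smaller offscreen canvas first, then call getImageData() on that:
// const off = document.createElement('canvas');
// off.width = 28; off.height = 28;
// off.getContext('2d').drawImage(canvasElement, 0, 0, 28, 28);
```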
I’ve just re-read the question and realised the answer above only covers half of it. The other half, about sketch-rnn, is missing.
(I happen to have worked on a cool sketch-rnn project in the past)
Personally I believe the question could’ve been phrased better: the CNN part is confusing. My understanding now is that you have a canvas, probably from p5.js, and you want to feed information from it to sketch-rnn to generate new drawings. What still isn’t clear is what happens to this canvas: is it something you generate and have control over, is it simply loading some external images, or something else?
If the input to sketch-rnn is a canvas, you would need to extract paths/vector data from the pixel/raster data. This functionality moves away from p5.js into image processing/computer vision territory and is therefore not built into the library; however, you could use a specialised library like OpenCV.js and its findContours().
I actually started a library to make it easier to interface between OpenCV.js and p5.js, and you can see a basic contour example here. To access the contours as an array of p5.Vector instances, you’d use something like myContourFinder.getPolylines() to get everything, or myContourFinder.getPolyline(0) to get the first one.
It’s also worth asking if you need to convert pixels to paths (for sketch-rnn strokes) in the first place. If you have control over how things are drawn into that canvas (e.g. your own p5.js sketch), you could easily keep track of the points being drawn and simply format them in the sketch-rnn stroke format.
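For example, strokes recorded as lists of absolute points could be converted to the stroke-3 representation sketch-rnn works with ([Δx, Δy, pen lifted] per point, with the pen-lifted flag set on the last point of each stroke). The helper below is an illustrative sketch under that assumption, not an official sketch-rnn API:

```javascript
// convert strokes recorded as absolute points, e.g.
// [[{x, y}, ...], ...] (one inner array per pen-down stroke),
// into stroke-3 format: [dx, dy, penLifted] per point,
// where penLifted = 1 marks the end of a stroke
function strokesToStroke3(strokes) {
  const result = [];
  let prev = { x: 0, y: 0 };
  for (const stroke of strokes) {
    stroke.forEach((pt, i) => {
      const lifted = i === stroke.length - 1 ? 1 : 0;
      result.push([pt.x - prev.x, pt.y - prev.y, lifted]);
      prev = pt;
    });
  }
  return result;
}
```

In a p5.js sketch you would start a new inner array in mousePressed() and append points in mouseDragged(), then run the conversion once the drawing is done.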
In terms of using sketch-rnn in JS, the sketch-rnn demo you’ve linked above actually uses p5.js, and you can find more examples on the magenta-demos GitHub repo (basic_predict is a good start).
Additionally, there’s another library called ml5 which is a nice and simple way to make use of modern machine learning algorithms from p5.js, including sketch-rnn. As you can see on the documentation page, there is even a ready-to-remix p5.js editor sketch.
Unfortunately I won’t have the time to put all of the above together into a nice, ready-to-use example, but I do hope there is enough information here on how to take these ingredients and combine them into your own sketch.
Answered By – George Profenza