Tag Archives: HTML 5

Pixel Pushing with Canvas

Today we are going to explore what I think is one of the coolest features of the canvas tag: the ability to extract and manipulate pixel data. This is an amazing feature of canvas that will help developers create all sorts of cool applications, from image manipulation to code-driven artwork.
I think this graphic sums it up…


I wanted to make a special graphic for this post, so I cracked open Blender and came up with this HTML5 rocks 3D scene. The idea was to have a very distinct difference between the ranges of colors so that I could detect when I was over an orange pixel versus a grey pixel. For a surprise, hover over any orangish pixel in the scene. See it in action.

Loading an image onto the Canvas

In order to get the pixels from our image we will first need to load it onto the canvas. This process is pretty simple. First we need to create some canvi : ) (Is that a word?)

Next, we will need to load our image and shove it into the “rocks-canvas” element. Upon load we will fire an init function. Init will set up references to our canvas and canvas context. The main magic is line 11: we can take our loaded rocksImg and use the drawImage function to draw it directly to the canvas context. If you recall, we used this drawImage function in our canvas spritesheet example as well.
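A minimal sketch of that load-and-draw step. The canvas id “rocks-canvas” is from the post; the function names and the image path are my own placeholders:

```javascript
// Sketch of the load-and-draw step. The canvas id "rocks-canvas" comes
// from the post; loadRocks and the image path are hypothetical names.
var canvas, context;

function init(rocksImg) {
  canvas = document.getElementById('rocks-canvas');
  context = canvas.getContext('2d');
  // Draw the loaded image onto the canvas at the top-left corner
  context.drawImage(rocksImg, 0, 0);
}

function loadRocks(src) {
  var img = new Image();
  // Only draw once the image has actually finished loading
  img.onload = function () { init(img); };
  img.src = src; // e.g. 'rocks.png' (hypothetical path)
}
```

You would kick this off with something like `loadRocks('rocks.png');` once the page has loaded.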

Mouse Movin’

Once we have our image drawn we need to sample pixels based on the mouse movement. As you can see from our previous code snippet, the last line sets up the onMouseMove event, so we should be ready to go. FYI, in order to keep it simple we are going to condense the code from the working example. The full source is at the bottom if you want all the bells and whistles.

OK, so we first get ev.offsetX and ev.offsetY, which give us our mouse position. Next, we call getImageData from our canvas context and pass in the x and y coordinates followed by a width and height, which in our case is a 1px x 1px sample size. Then we read the data attribute, which gives us our image data as a byteArray. The pixel data is stored in this byteArray in RGBA format, meaning the resulting pixel values are pixel[0] = RED, pixel[1] = GREEN, pixel[2] = BLUE, pixel[3] = ALPHA. RGB values range from 0 to 255, so each of the values returned from the byteArray will fall within that range. The last thing we can do is check each of the RGB values against a range of colors, and we can trigger something on the page if the user is on a certain color. (Example Lines 13 – 19)
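The sampling and color-check described above might look like this. The exact orange thresholds here are guesses for illustration, not the values the demo uses:

```javascript
// Sample a single pixel under the mouse and test whether it is "orangish".
// The threshold values in isOrangish are assumptions, not the post's.
function samplePixel(context, x, y) {
  // 1px x 1px sample; .data is a flat RGBA byte array
  return context.getImageData(x, y, 1, 1).data;
}

function isOrangish(r, g, b) {
  // Orange: strong red, mid green, little blue (illustrative thresholds)
  return r > 180 && g > 60 && g < 170 && b < 100;
}

function onMouseMove(ev) {
  var ctx = ev.target.getContext('2d');
  var pixel = samplePixel(ctx, ev.offsetX, ev.offsetY);
  if (isOrangish(pixel[0], pixel[1], pixel[2])) {
    // trigger the surprise here
  }
}
```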

Tip: If you want to see what the values will look like, just pull up the color picker in Photoshop.


Accessing canvas pixel data opens up many possibilities for cool and interesting projects. As always, I would be interested in any cool applications of these techniques. Feel free to post a comment or link.

Source links.
Code Source
HTML5 Blend file. If you want to change the colors in the 3D file… : )

Three.js + Blender Part 2

At this point we should have a JavaScript file that represents our model. Let’s investigate how to get this model into our Three.js scene.

Three.js Basics

In Three.js there are a few basic things you need to set up a 3D scene. You will need a scene (duh!), camera, renderer, and an animation loop.

We will be setting up an init function that will handle most of the setup. First we need to define variables for the main container, camera, scene, and renderer. Next we create a new THREE.Camera. The parameters are (fieldOfView, aspectRatio, near, far, target). Note: the target parameter is optional in our example, and near and far refer to the clipping planes used when rendering.

After the camera is ready to go we need a Scene to render in. This is as easy as creating a new THREE.Scene. Now since our eyeballs are a different mesh than our base mesh for our bird body we will need a group to put all the 3D objects in. This is so that we can change the group properties and the whole bird object will move within the same coordinate system.
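The steps above can be sketched as follows, using the Three.js API of the era this post describes (THREE.Camera, scene.addObject); current versions use THREE.PerspectiveCamera and scene.add() instead:

```javascript
// Scene setup sketch using the old Three.js API described in the post.
// Camera parameter values and the z position are illustrative assumptions.
var container, camera, scene, birdGroup;

function initScene(width, height) {
  container = document.createElement('div');

  // (fieldOfView, aspectRatio, near, far) - near/far are the clipping planes
  camera = new THREE.Camera(60, width / height, 1, 10000);
  camera.position.z = 1000;

  scene = new THREE.Scene();

  // Group the body and eyeball meshes so they move in one coordinate system
  birdGroup = new THREE.Object3D();
  scene.addObject(birdGroup);
}
```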

Creating objects

Now that we have the scene and camera set up, we can start adding objects to our scene. Since we have exported our main mesh as an external JSON model, we will need a way to import it into our scene. Enter the THREE.JSONLoader function. All we need to do is set up a new THREE.JSONLoader and then call the load function. Once our bird.js file is loaded, the callback function will be executed and the loaded geometry will be passed to it. In the callback function we can create a new THREE.Mesh; set the position, rotation, and scale; and finally add the newly created mesh to our group using the addChild method.

One thing to note… the scale at which the model is exported seems to be really small. I think this is due to the size of Blender's internal units, though I am not sure about this. In any case, you will need to bump up the scale in order to see your exported model. Here we are setting the scale to 700.

The last thing to note here is that we are setting up a material with a texture image that “maps” to the UV coordinates exported from Blender. If you are not familiar with UV coordinate mapping, click here. UV mapping allows a 2D image to be “wrapped” onto an object in 3D space, which is particularly useful for texturing. Three.js materials come with a map property you can set to your texture image. Every major 3D program should come with UV mapping capabilities.
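Putting the loader, material, and scale steps together might look like this. Again, this uses the old era's API (parameter-object load call, addChild); bird.js is from the post, while the texture path and material type are my assumptions:

```javascript
// Sketch of loading the exported JSON model with the era's JSONLoader API.
// 'bird.js' comes from the post; the texture path and MeshLambertMaterial
// choice are illustrative assumptions.
function loadBird(group) {
  var loader = new THREE.JSONLoader();
  loader.load({ model: 'bird.js', callback: function (geometry) {
    // UV-mapped texture: the 2D image is "wrapped" onto the 3D surface
    var texture = THREE.ImageUtils.loadTexture('bird-texture.png');
    var material = new THREE.MeshLambertMaterial({ map: texture });

    var mesh = new THREE.Mesh(geometry, material);
    // Blender units export very small - bump the scale up to see the model
    mesh.scale.set(700, 700, 700);
    group.addChild(mesh);
  } });
}
```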

Rendering / Animation Loop

The last thing we need to do is actually render this out. Since we want to use WebGL for this example, we will need to set up a new THREE.WebGLRenderer, set the width/height, and append it to the container DIV element we created above. That’s it for the renderer.

Lastly, we need a loop to update and animate the model. As you can see, we are once again using our friend requestAnimationFrame to accomplish this. If you need more info on requestAnimationFrame, go here. So we request an animation frame from the browser and then call the render function, passing in our scene and camera. Simple stuff. Before we render, we could update properties of our group, as seen in the commented line.
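The renderer setup and loop described above could be sketched like so, assuming `scene`, `camera`, and `birdGroup` were created in the earlier init step:

```javascript
// Renderer and animation loop sketch. Assumes scene, camera, and birdGroup
// already exist from the init step; the rotation line is an example update.
var renderer;

function initRenderer(container, width, height) {
  renderer = new THREE.WebGLRenderer();
  renderer.setSize(width, height);
  container.appendChild(renderer.domElement);
}

function animate() {
  // Ask the browser to call us again on the next frame
  requestAnimationFrame(animate);
  // birdGroup.rotation.y += 0.01;  // example per-frame update
  renderer.render(scene, camera);
}
```

After init, a single call to `animate()` starts the loop.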


So as you can see it’s pretty simple to get complex models up and running with Three.js. I have attached the source if anyone would like to investigate further. Source.

Once again here is the final product. Mr. Bird Preview

Three.js + Blender 2.5 = FTW

For anyone who has played with Away3D in Flash, inevitably you hit a moment where you go, “Man, this is cool, but these 1k poly counts are really frustrating.” This is because, until recent beta demos, Away3D had no access to the GPU and relied on a completely software-based solution. I always kept thinking in the back of my mind, wouldn’t it be nice to export a reasonably low-poly model from Blender, Maya, (insert 3D program of choice) and not have to worry as much about poly counts or UVs? Enter WebGL. Now supported in Chrome, Firefox 4.0, and Safari nightly betas, WebGL is a JavaScript-based wrapper that uses canvas and some magic to talk to the GPU via OpenGL ES 2.0. More Info


Here is a preview of what we will be building. Mr. Bird Preview


Now this is where it gets really cool… Three.js is a library that makes this whole process super simple and quite familiar to anyone who has used Away3D. For the purposes of this post we are going to concentrate on the process of getting a model exported from Blender and into Three.js. If you need more info and other examples, please check out the project on GitHub. Here

Installing Three.js plugin for Blender.

Let’s get it started… We will need to get the Blender exporter script installed. When you download and extract the Three.js project, there should be a “utils/exporters” folder. Look for your version of Blender (either 2.56 or 2.57) and install the Three.js plugin using the instructions provided here.

Exporting a Model from Blender

Now that we have the plugin installed we can fire up a model to test with. I used a bird model that I got from the Big Buck Bunny project. Here (bird.blend) This model had 32k faces, so I thought it would be a good test for performance.

Now the magic: File -> Export -> Three.js. That’s it! Blender will export a model in JSON format that is ready to use within Three.js, with geometry, UVs, and materials.

In Part 2 we will explore how to actually get this exported JSON model working in Three.js.
