openFrameworks 007

There is a good introduction to openFrameworks 007 in

Anti-Aliasing in openFrameworks

Unlike Processing's smooth(), ofEnableSmoothing() only works on lines, not on shapes or 3D objects. To get anti-aliasing in openFrameworks, we can add this line in main.cpp:

window.setGlutDisplayString("rgba double samples>=4 depth");
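For context, this is roughly where that line goes in an oF 007 main.cpp. This is only a sketch: it assumes the default GLUT window and the stock testApp class of an oF 007 project, and it compiles only inside an openFrameworks project, not on its own.

```cpp
#include "ofMain.h"
#include "testApp.h"
#include "ofAppGlutWindow.h"

int main() {
    ofAppGlutWindow window;
    // Request a multisampled display mode BEFORE setting up OpenGL;
    // "samples>=4" asks GLUT for at least 4x multisample anti-aliasing.
    window.setGlutDisplayString("rgba double samples>=4 depth");
    ofSetupOpenGL(&window, 1024, 768, OF_WINDOW);
    ofRunApp(new testApp());
}
```

The key point is that the display string must be set before ofSetupOpenGL(), because the window's pixel format cannot be changed after the context is created.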

Max/MSP Jitter Learning 1 – 3D

In this blog post, I will introduce some basic points of 3D work in Max/MSP Jitter and the effects used in the project. Following the theme "Fragment", I will focus on the particle system in Max/MSP.

As for 3D work, first we should know that most 3D data is stored in a "jit.matrix" object, and then shown in a "jit.pwindow" or "jit.window".

To learn about matrices, the two most basic concepts are "dim" (dimensions) and "plane" (the values stored in each cell). For example, we can make a 2*2 or a 2*3*4 matrix, which can describe x*y (2D, 2 dimensions) or x*y*z (3D, 3 dimensions) data. A 3*3 matrix has 9 cells, and each cell contains a number of "planes". The user can define the plane count, but in most graphic work the planes contain the color data; for instance, four planes can represent the alpha, red, green and blue values.
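The dim and per-cell values (what Jitter calls planes) can be sketched in code. This is a toy C++ model of a 3*3 matrix with 4 planes (ARGB); the names here are mine, not part of the Jitter API:

```cpp
#include <array>
#include <cstddef>

// Toy model of a Jitter-style matrix: dim = 3x3, planecount = 4 (ARGB).
constexpr std::size_t DIM_X = 3, DIM_Y = 3, PLANES = 4; // A, R, G, B

// Cells are stored row by row; each cell holds PLANES values.
constexpr std::size_t cellOffset(std::size_t x, std::size_t y, std::size_t plane) {
    return (y * DIM_X + x) * PLANES + plane;
}

std::array<unsigned char, DIM_X * DIM_Y * PLANES> matrix{};

// Write one cell's alpha, red, green and blue planes.
void setARGB(std::size_t x, std::size_t y,
             unsigned char a, unsigned char r,
             unsigned char g, unsigned char b) {
    matrix[cellOffset(x, y, 0)] = a;
    matrix[cellOffset(x, y, 1)] = r;
    matrix[cellOffset(x, y, 2)] = g;
    matrix[cellOffset(x, y, 3)] = b;
}
```

So a 3*3 matrix with 4 planes holds 36 numbers in total: 9 cells, each carrying one value per plane.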

For lots of beautiful artworks, the particle system is the key point. In Max/MSP, we can use "jit.p.shiva" and "jit.p.vishnu" to produce particles: jit.p.shiva manages particle birth and death, and jit.p.vishnu applies forces to them.

First, jit.p.shiva needs an N*2 matrix, where each cell has 5 planes. N is the maximum number of particles. At first I didn't understand what the 2 means, but in a test with N = 2, cell [X, 1]'s value equaled the content of the next frame's [X, 0], so I guess row 1 saves each particle's previous position (which is handy for drawing particles as line segments between two frames).

As for the five planes, the first one is the ID of the particle, the second one is the particle's life, and the last three are the X, Y and Z values of its position. Other attributes will be introduced in a following post, not this one.
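Those five per-cell values (planes, in Jitter's terms) can be modeled like this. This is a hypothetical C++ sketch, not Max/MSP code; the struct and function names are invented:

```cpp
#include <cstddef>
#include <vector>

// Toy model of one row of jit.p.shiva's particle matrix:
// plane 0 = id, plane 1 = life, planes 2-4 = x, y, z position.
struct Particle {
    float planes[5];
    float id()   const { return planes[0]; }
    float life() const { return planes[1]; }
    float x()    const { return planes[2]; }
    float y()    const { return planes[3]; }
    float z()    const { return planes[4]; }
};

// Count particles that are still alive (life > 0), the way a patch
// might scan one row of the shiva output matrix.
std::size_t countAlive(const std::vector<Particle>& row) {
    std::size_t alive = 0;
    for (const Particle& p : row)
        if (p.life() > 0.0f) ++alive;
    return alive;
}
```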

Then, most of the 3D drawing is produced by the "" object. The simplest architecture can be seen in the last patch: using "t" (trigger) to output the matrix data, "bang" and "erase" messages in right-to-left order. The erase message refreshes (clears) the background of the drawing.

Most basic shapes can be created with the object shown above.

Using jit.gradient creates gradient effects.

The object shown above can be used to render a Non-Uniform Rational B-Spline (NURBS) surface. In this project, the animation of each particle is made with this object.

The jit.gl.text3d object draws text in 3D space. In this patch, I use text3d to make the effect of the text being included in the fragments.

Linking the two objects shown above makes the window's perspective controllable by the mouse.

Using jit.iter, we can iterate through all the cells of a matrix, sending a Max message or list for each cell out of the object's left outlet. The clumsy method I used before was to use getcell to read a cell's content and setcell to rewrite it; that also needs some control structure around it, a loop producing the coordinates of the matrix.
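The getcell/setcell loop that jit.iter replaces is essentially this double loop. It is sketched in C++ with invented names on a one-plane char matrix, here inverting every cell:

```cpp
#include <cstddef>
#include <vector>

// Toy stand-ins for a 1-plane char matrix and the getcell/setcell messages.
using Matrix = std::vector<std::vector<unsigned char>>; // indexed [y][x]

unsigned char getcell(const Matrix& m, std::size_t x, std::size_t y) {
    return m[y][x];
}

void setcell(Matrix& m, std::size_t x, std::size_t y, unsigned char v) {
    m[y][x] = v;
}

// The explicit coordinate loop jit.iter hides: visit every cell once,
// read it with getcell, rewrite it with setcell.
void invertAll(Matrix& m) {
    for (std::size_t y = 0; y < m.size(); ++y)
        for (std::size_t x = 0; x < m[y].size(); ++x)
            setcell(m, x, y, 255 - getcell(m, x, y));
}
```

jit.iter does the same traversal for you, emitting one message per cell, which is why it is much less clumsy than building the coordinate loop by hand in a patch.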

Max/MSP can use lots of open-source external packages to extend personal patches. For example, in this piece we could use cv.jit for face tracking or image tracking to control the orientation of all the particles.

Finally, the effects of this piece would be:

(some skills could be learnt from

Besides, as I am used to traditional object-oriented programming, the visual programming interface is quite hard for me to handle. Also, even though Max/MSP provides the message box for some debugging, it is still not convenient for numeric computation. So I think that, in the following days, I could try to use Processing or C++ for the numeric computation and use OSC to send messages to Max/MSP. The advantage of Max/MSP is its strong rendering effects and the features that are complex to build in a programming language, such as camera, perspective and light. Also, as the tutorial shows, Max/MSP is convenient to communicate with Java and JavaScript.
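As a sketch of what sending values over OSC to Max/MSP involves on the wire, here is one OSC message with a single float argument packed by hand in C++. The address "/orient" and the function names are made up for illustration; a real project would use an OSC library (such as oscpack) and send the bytes over UDP to a [udpreceive] object in Max.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// OSC strings are null-terminated and padded to a multiple of 4 bytes.
static void appendPadded(std::vector<uint8_t>& buf, const std::string& s) {
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back('\0');
    while (buf.size() % 4 != 0) buf.push_back('\0');
}

// Pack one OSC message: address, type tag string, then the argument.
std::vector<uint8_t> buildOscMessage(const std::string& address, float value) {
    std::vector<uint8_t> buf;
    appendPadded(buf, address);
    appendPadded(buf, ",f"); // type tag string: one float argument
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);
    // OSC numbers travel big-endian on the wire.
    buf.push_back(uint8_t(bits >> 24));
    buf.push_back(uint8_t(bits >> 16));
    buf.push_back(uint8_t(bits >> 8));
    buf.push_back(uint8_t(bits));
    return buf;
}
```

Because the format is this simple and well specified, the same message can be produced from Processing, C++ or a phone and read by Max/MSP unchanged, which is what makes OSC a convenient bridge.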

Max/MSP Jitter Learning 1 – 2D

Before introducing the project Fragment, I'd like to write, as a review, something about the skills for graphic work in Max/MSP Jitter that I learned while completing the whole project.

As for learning materials, frankly speaking, I didn't find many online tutorials, so the three most important things are:

  1. The tutorials in the Max/MSP Help: there are lots of patches and introductions in them.
  2. Once a tutorial patch is open, switch to "edit" mode and right-click on the objects in it; their help patches explain more.
  3. Read the reference pages of the objects in detail: almost all the attributes and methods of an object can be found there.

2D works

To complete a graphic work, the basic elements are shape, color and a place to draw.


In Max/MSP, shapes can be drawn by:

Like most graphic tools, Max/MSP can draw lines, ellipses and rectangles, represented by "linesegment", "oval" and "rect" respectively, as the picture shows.


As for color, the frgb message (foreground RGB) selects the color for the painter, as the picture shows:

Place to draw

2D painting can be presented in the "lcd" object; more details can be found in its reference documentation.

Media in public Assign 7


The first article focuses on an interesting question, "what are we producing?", on different needs, and on how to deal with the gap between the process of production and the usage of material. The second one is a representative idea of "little changes save the world": just changing Google's background to black would save lots of power, amazing! The third article relates to the topic of materialism and brought some interesting examples to my mind too, but I still need some time to get familiar with this part.

And as a quick development of the streetlight (an unnoticed thing): if I get some solar devices and needn't worry about power, I'd like to install some prayer wheels on the body of the streetlight (so that the streetlight's body doesn't totally disappear because the bulb draws all the attention).

The prayer wheels on the streetlight wouldn't need a hand to make them move; they could detect you and start to spin automatically. When they are shining in the sunshine, how beautiful they would be, especially where there are lots of streetlights.

Media in Public Assign 8

Sky message board

When we look at the sky above this city through our mobile phone's camera, what would we like to see? I asked some people: "If you could write a short sentence in the sky that everyone living in Newcastle could see, what would you write? What colour would it be?" (Thanks to Jamie and Michael for their advice. I've tried Michael's suggestion, like the Red Building; actually that part is really interesting, but I still doubt whether I could do it, because I'm not sure a phone is fast enough, and the control part, like how to build it, also puzzles me :-) But thank you very much!)

Then what I got are:
1. Hello world! (Several answers like this came from a programmer group…)
2. I wish XXX love me forever! (Pink)
3. I could defeat everything! (Golden)
4. I wish everyone who sees these words could find their happiness. (Red, Big Smile)
5. Be humorous and relax, you could win~ (Red)
6. I love Newcastle!
7. Keep your head in the clouds. (Purple)
8. Keep calm and carry on. (Yellow)
9. Have a lovely day 🙂 ! (Yellow)
10. I will be a star! (Orange)
11. Stay positive 🙂 (Pink)
12. Peace! (White)
13. Olive! (White)
14. Love (Red)
15. Happy Mothers Day~ (Pink)
16. Love and Peace (Purple)
17. Love and Peace (Pink)
18. I wish it was sunny and warm today. (Yellow)
19. We are all under the same sky. (White)
20. The sky is beautiful every time. It doesn't matter if it is sunshine or rain. It's something that I like. The sky always makes me happy. I don't think about my troubles and problems. When I am looking at the sky, I feel free and fully relaxed.
21. I’m leaving you (Depends on the light)

Then I think the sky message board could have 2 functions; the second is based on the first.

1. Send messages to particular people.
One day, Mary receives a message from her boyfriend: "Look at the sky over the city centre!" (it could also be the sender's location). She runs the app "Magic City", points the phone's camera at the sky, and finds a big "I love you" made of clouds floating in the sky, like this:

(The same as the original idea.)

Also, there are some pictures of skywriting; the effect would be similar.


2. For everyone (Optional)
One day, Pengfei is taking a photo of himself, alone, like this:

Isn’t it a little silly?
Now, we could do this:
1. The phone judges whether you are angry or smiling (or you just input keywords like "smile", "angry", or maybe "food").
2. Based on the sky message board's online data, we get a photo like this:

Isn't it great?
This idea is similar to danmaku (bullet-comment) videos like this:
and the poster like this:

(To make it here:

Also I got an interesting point: it is likely that when you are taking photos outside a restaurant, the word cloud would be "Delicious!", "Swallow my tongue!" or "Peaceful". From a commercial point of view, it could also be developed.

Everything needed is a mobile phone and net access. I'm still working out how the software system could work, for example:

1. Import the Google Earth map (use the 2D map, or a 3D model like the one Street View uses?).

2. Calibration (align the map with the camera view).

3. Send and receive messages (based on Twitter, or ...?).

4. (Optional) Use WordNet to build a simple classification method that sorts every message into "happy" or "unhappy", based on the facial expression recognized by the phone (if possible) or a keyword such as "smile" input into the phone.

5. (Optional) How to handle the colour of the sentence: changing over time with the sunlight? Static? A simple animation? Transparent?
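The keyword fallback in step 4 could start as small as this toy C++ sketch. The keyword lists and the function name are invented for illustration; a real version would use WordNet or the phone's expression recognition:

```cpp
#include <algorithm>
#include <cctype>
#include <string>
#include <vector>

// Toy fallback classifier: sort a sky message into "happy" / "unhappy" /
// "unknown" by simple keyword lookup (placeholder for WordNet).
std::string classifyMessage(std::string text) {
    // Lowercase the message so matching is case-insensitive.
    std::transform(text.begin(), text.end(), text.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    const std::vector<std::string> happy   = {"love", "happy", "smile", "peace"};
    const std::vector<std::string> unhappy = {"leaving", "sad", "angry"};
    for (const std::string& w : happy)
        if (text.find(w) != std::string::npos) return "happy";
    for (const std::string& w : unhappy)
        if (text.find(w) != std::string::npos) return "unhappy";
    return "unknown";
}
```

Even this crude version would already sort most of the answers collected above (the "Love" and "Peace" messages as happy, "I'm leaving you" as unhappy).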

Three ways to use Kinect in processing

I have found 3 ways to use Kinect: one is based on libfreenect, and two are based on OpenNI.



Shiffman’s library:


1. Use a Processing wrapper

2. Read the data produced by OpenNI via OSC in Processing (or Max/MSP)

This method lets OpenNI be used on many platforms, because OSC (Open Sound Control) is widely supported now.

Installing OpenNI, Kinect drivers and NITE

Recently, some friends on the forum have kept asking how to set up the environment for the Kinect, so here I want to summarize some points.


As for Windows, although there are lots of articles describing the process, there are still three points that are easy to ignore and cause lots of problems.

1. OpenNI only works well on 32-bit systems for now.

2. The installation sequence is important: first OpenNI, second KinectSensor (the driver), last NITE. The versions must match.

3. Don't edit the XML files with WordPad; it can cause problems related to Unicode vs. UTF-8 encoding.

Most remaining questions can be solved by reading the guide in the "Documentation" folder.


This article guides you to the goal step by step; the readme file of KinectSensor also explains a lot.

A problem may appear when installing MacPorts: after completing the installation, we should also run selfupdate first, or it may warn you:

Warning: No index(es) found! Have you synced your source indexes?

Also, the command "sudo port install libtool" could not run. This problem is caused by the firewall (maybe the school's), which stops svn from doing the update. We can tackle it this way:

In the last step, we should edit sources.conf. I'm new to Mac, so I'm not sure whether there is any tool other than "vi" in the Terminal that could do it. Don't worry: "vi" is complex (and great), but for our task it's quite easy. First, enter "insert mode" (vi starts in "command mode"; press i to enter insert mode); second, edit; last, save and quit (press Esc to leave insert mode and return to command mode, then input ":wq" to save and quit, or ":q!" to quit without saving).

Lastly, for me the samples would not run just by clicking their icons; I had to type "./filename" in the Terminal to run them.

Good Luck!

PS: In the Terminal, when you type a password nothing is shown, not even "*", but don't stop: just type it and press Enter.

Be careful when renaming a file or folder: if a space follows the name, then when you type the name (without the trailing space) the machine will never find it.

Processing Note 1: Random and Color

From today on, I will select some examples that I think are important and comment on them.

1. Random and Color

I think that for most freshmen like me, Processing means using a lot of objects to make the output as magnificent as possible, which means we need functions to build them, manage them and make them different. So let's begin with "random" and "color".

P206 Processing – Shiffman Example 13-3: Probabilities

void setup() {
  size(200, 200);
  background(255);
  smooth();
  noStroke();
}

void draw() {
  // Probabilities for 3 different cases
  // These need to add up to 100%!
  float red_prob = 0.60;   // 60% chance of red color
  float green_prob = 0.10; // 10% chance of green color
  float blue_prob = 0.30;  // 30% chance of blue color

  // random(1) is used to get a number between 0 and 1.
  float num = random(1); // pick a random number between 0 and 1

  // If the random number is less than .6
  if (num < red_prob) {
    // Once the color is decided, we can still vary it a little:
    // 255*random(0.6, 1) makes it change from "slightly" to "very".
    // fill(value1, value2, value3, alpha); alpha is the opacity
    // of the fill, from 0 to 255.
    fill(255*random(0.6, 1), 53, 2, 150);
  }
  // If the random number is between .6 and .7
  else if (num < green_prob + red_prob) {
    fill(156, 255*random(0.6, 1), 28, 150);
  }
  // All other cases (i.e. between .7 and 1.0)
  else {
    fill(10, 52, 255*random(0.6, 1), 150);
  }
  ellipse(random(width), random(height), 64, 64);
}
For freshmen in Processing like me:
This blog gives several very different examples showing how to use knowledge of math and physics to create something really gorgeous. I plan to learn the way of thinking in Processing: what do I want? How can I get it? How do I break a big project into small steps, and how do I structure it?
