
Other thoughts: failures are fortunes

My project is far more difficult than I thought.
I didn't finish, and I think there is still a long journey ahead. The other thing is that even if I do finish, the final effect will be poor because of the low resolution and the very slow speed.
I think those errors and detours are also fortunes for me, not only in the project itself but also in other ways.
Doing the "right" thing doesn't guarantee a good project; doing something slightly wrong might create unintentional results. I can't find an example in my own project, but here is one from elsewhere: in a turn-based fighting game, everyone assumes a quick attack at the start is best, yet sometimes the winners are the slow characters.
Here is a summary of some useless attempts I have made with Processing and the Kinect so far:
Useless attempts in Processing:
Smile Detection:
I didn't have any computer background. I had only spent one month learning Processing (with an easy-to-start book by Daniel Shiffman). So I looked for useful examples on GitHub and OpenProcessing, and found OpenCV, which is really useful; I borrowed some of its code into my project.
Then I found a smile detection library called PSmile, built by Bryan Chung. I tried every method to install it, but it didn't work, because the version is too old: Bryan released PSmile in 2011 for Processing 1.0, while Processing 2.2.1 was released in 2014.
I found this "version problem" to be very common in my later practices.
For example, the SimpleOpenNI 1.98 library does not fit Processing 2.2.1; some constant names no longer exist.
Also, recently (14 August) Processing released its 3.0 version for Mac. I downloaded it, and then OPENGL could not adapt to it.
So… a lot of time was wasted this way…
I was thinking of imitating a normal photography environment, because nowadays people are accustomed to taking pictures directly with built-in camera filters.
However, the beautiful filter effects (not the built-in Processing ones) slow the running speed down dramatically.
So I quit. But I could still imagine that even once the final project was created, the live video would be very annoying.
Light source:
I found a very good example on the OpenProcessing forum, and I made a small change to relate it more to my project.
The final result, as I posted the video on YouTube, is "Sunrise and Sunset on a Face".
My topic is to see how a face changes with light. However, 2D images didn't express the whole idea I wanted to give, so I used the Kinect. The video is my attempt at the 2D interpretation in Processing.
Before I edited the code, I was thinking of using live video to see the light change. However, it ran very slowly; the video seemed stuck even at a low frame rate (and the frame rate constrains the fluency of the video anyway). So I used a portrait picture of my friend. The image didn't fully translate the effect I wanted, but it was still good practice for me.
Starting to study the Kinect was also a challenge for me.
I spent a huge amount of time practicing with it.
In my initial plan (which I now think was too simple), the first step was to create and constrain a virtual light area with the Kinect's depth camera, and the second step was to create a small bright sphere as the light. However, the first step took up most of my effort and I still have no way to resolve it, so I'm still working on this part. The version problems also make it hard to move a step further.
I spent a lot of time looking for examples and found several possible directions: structured light, projection mapping, and graffiti projects. However, even though it's an open-source environment, much of the code is still kept private.
Also, creating virtual light is not simple. For example, what kind of light? A round shape? A spherical light like the sunrise? And the main question: I want the virtual light to look like real light in the physical world.
Furthermore, I'm rethinking the whole structure I set for myself: how can I make sure these two independent programs can be merged into one piece of work? I'm not sure. Maybe it was a wrong direction from the start…
As I mentioned at the beginning, failures are also fortunes. On that note, I'm rethinking this summer's work:
a. I should face the real abilities I have (at first I overestimated my ability and thought open source was really "open", but much of it is kept private);
b. I should build a suitable timeframe (right now I'm going crazy doing my dissertation and my project at the same time);
c. even though I worked hard, a right direction is more important.

Practice 7 – virtual light in the Kinect (failed)


I really like the “sunrise and sunset light on a face” idea and the light effect.

Then I borrowed my friend's picture (because the last one might infringe someone's copyright) and made a new piece of work.

I tried to use live video to see the light change.
However, it ran very slowly; the video seemed stuck even at a low frame rate (and the frame rate constrains the fluency of the video anyway).
So I used a portrait picture of my friend. The image didn't fully translate the effect I wanted, but so far it's the only thing I feel a little pleased with.

However, 2D images didn't express the whole idea I wanted to give. I do like the light effect, so I tried to apply the spherical light to the Kinect.


I made as much effort as I could, but I failed:

a. Using depth information, I created virtual light (spherical type) in the constrained space (as shown in the picture), but the result is not obvious.

b. Using the point cloud, I created a spot light (drawing several spheres and highlighting them), but the result looks too fake.

c. I looked for other examples, like graffiti projects and some works from Kimchi and Chips. However, open source doesn't mean everything is open; what I can get from the Internet is limited. There should also be other ways to build virtual light, but so far I only know a little about Processing. The third problem is that these products keep updating their versions, so the earlier versions I used cannot adapt to the new ones.

Also, at the very beginning, I overestimated my ability…


Practice 6 – Spherical light on face (only with Processing)





I found a very beautiful spherical light example on OpenProcessing, then made a small change and set a background picture in the code. (I just downloaded the woman's picture from Google; I don't know if that's allowed?)

The code is short (only 40 lines) but creates amazing results! How powerful the Processing language can be!

The key code creates two light lines, one vertical and one horizontal, to mimic sunrise and sunset.
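The OpenProcessing sketch itself isn't reproduced here, but one common way to get this kind of sweeping-light effect (my own guess at the mechanism, with made-up names, not the original author's code) is to fade each pixel's brightness with its distance from a line that moves across the image:

```java
public class LightLine {
    // Brightness contribution for a pixel row y, given a horizontal
    // light line at lineY: full brightness on the line, fading
    // linearly to zero over `radius` pixels. A vertical line works
    // the same way on the x coordinate.
    static double brightness(int y, int lineY, int radius) {
        double d = Math.abs(y - lineY);
        return Math.max(0.0, 1.0 - d / radius);
    }

    public static void main(String[] args) {
        System.out.println(brightness(100, 100, 50)); // on the line: 1.0
        System.out.println(brightness(150, 100, 50)); // at the edge: 0.0
    }
}
```

Animating `lineY` from the top of the image to the bottom over time gives the sunrise-to-sunset sweep.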



Spherical light on face (Processing practice)

The video I made also shows how light changes the face, though it's away from my initial idea (a 3D selfie).

I tried to make the live video with the sunrise-and-sunset light effect, but it ran slowly and didn't give as good a result as I wished. Maybe I need to try another way.

This practice didn't use the Kinect sensor. At first I was thinking about how to bring the Kinect in; then I thought, if it's not necessary, don't force it. High technology is not everything. The most important part is still creativity. And I think if the audience experiences the same thing the practitioner wants to give, the project is a success.

Practice 5 – Kinect depthImage range limit



The cube is the virtual light area. People can't feel the light there, but when someone comes into that area, the virtual light will be shown on the screen.

The first step is to calculate the size of the area.

As I noted in the picture, the width is 500 mm, with no length or height limitation (in practice it depends on the Kinect's field of view).

The key code is:

int[] depthValues = kinect.depthMap();
depthImage = kinect.depthImage();
for (int x = 0; x < 640; x++) {
  for (int y = 0; y < 480; y++) {
    int i = x + y * 640;
    int currentDepthValue = depthValues[i];
    // black out every pixel closer than 600 mm or farther than 1100 mm,
    // leaving only the 500 mm-deep "virtual light" slab visible
    if (currentDepthValue < 600 || currentDepthValue > 1100) {
      depthImage.pixels[i] = 0;
    }
  }
}
depthImage.updatePixels();
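The range test itself can be checked without a Kinect attached. Here is the same 600–1100 mm masking logic factored into plain Java (the helper name and the 255 "visible" value are my own illustration; the sketch blacks out pixels in place instead):

```java
public class DepthMask {
    // Keep only depth samples inside [near, far] millimetres;
    // everything else becomes 0 (black), as in the Processing sketch.
    static int[] mask(int[] depthValues, int near, int far) {
        int[] pixels = new int[depthValues.length];
        for (int i = 0; i < depthValues.length; i++) {
            pixels[i] = (depthValues[i] < near || depthValues[i] > far) ? 0 : 255;
        }
        return pixels;
    }

    public static void main(String[] args) {
        int[] depths = {300, 600, 850, 1100, 2000};
        // 300 mm is too close and 2000 mm too far; the middle three survive.
        System.out.println(java.util.Arrays.toString(mask(depths, 600, 1100)));
    }
}
```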

The video is my test result; congratulations, it works!

The final frame of the video shows that the virtual light can be created in my hands.

The next step is to create the light.

Inspiration from Audrey Penven's photography



While looking for useful information for my Kinect practice, I found this project. I'm really shocked.

Audrey Penven's photography: Dancing With Invisible Light.

It's a lot like my project, and it's sooooooo artistic!

“Dancing With Invisible Light is a series of interactions with the structured infrared light from a Kinect. It is an exploration of machine vision, mediated through layers of technology and time delay. We moved through darkness, illuminated by light invisible to human eyes, finding our way through touch and intuition.”


Practice 4 – Face detection with Processing

1. I applied the OpenCV library in Processing.

2. I want to detect a smile and assign it a value; the smile value could then change the video filters. I also found two interesting projects that inspired how I might build the code.

Smile Shiffman

PSmile by Bryan Chung

However, the first one didn't publish its code, and the second contributed a smile detection library that only works in Processing 1.0.

And creating a new smile detector is beyond my knowledge.

The other question is that people change emotion (smile) so frequently; how should the camera filters change at that speed? It would cause an uncomfortable visual experience.

3. Then I had a new idea: how about just detecting the face's size? If people smile, their faces become slightly bigger.

Detecting the face's dimensions is an alternative to smile detection.

With the help of the OpenCV library, I created my own code, and the result looks like this video:

Face detection in Processing


Here's some key code:

Rectangle[] faces = opencv.detect();

for (int i = 0; i < faces.length; i++) {
  println(faces[i].width + "," + faces[i].height);
  if (faces[i].width < 100) {
    // small face: no filter
  } else if (faces[i].width >= 100 && faces[i].width < 120) {
    // (filter for this range omitted here)
  } else if (faces[i].width >= 120 && faces[i].width < 150) {
    filter(POSTERIZE, 4);
  } else {
    // (filter for the largest faces omitted here)
  }
}
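The width thresholds can also be factored into a plain function, which makes the cut points easy to test on their own (the bucket names and the helper are my own illustration, not part of the sketch):

```java
public class FaceWidthBucket {
    // Map a detected face width (pixels) to a filter choice,
    // using the same 100 / 120 / 150 cut points as the sketch above.
    static String filterFor(int width) {
        if (width < 100) return "none";       // face small or far away
        if (width < 120) return "mild";       // neutral expression
        if (width < 150) return "posterize";  // wider face, likely smiling
        return "strong";                      // big smile or close to camera
    }

    public static void main(String[] args) {
        System.out.println(filterFor(95));   // none
        System.out.println(filterFor(130));  // posterize
    }
}
```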


I used the built-in filters that Processing provides, but they don't look as beautiful as I imagined. I want Instagram-style filter effects; however, that's still a challenge for me.

Next I will practice with the Kinect.

Practice 3 – idea drafts






All three of these idea drafts could become my project.

In principle, as idea draft No. 1 shows, a Kinect sensor produces both 2D color video and depth images, which means it could detect facial expressions like a smile and detect head pose at the same time.

However, the Kinect's depth information is not very accurate, which means it cannot clearly identify the facial expression; this project in particular asks for a clear smile level: how widely people open their mouths when they smile or laugh.

Therefore No. 2 appeared. It uses two computers: one runs Processing for smile detection, the other runs the Kinect for skeleton tracking. However, because the two computers are independent of each other, the two functions are also independent, with no intersection.

Finally, I will apply idea No. 3, which combines the advantages of No. 1 and No. 2.

It still depends on later practice.

Practice 2 – Processing codes thinking

The idea is to use the smile level to control real-time video filter effects, so the code needs to:

1 Create several filter options. For example, Instagram's most popular filters: Valencia, Earlybird, Rise, Lo-Fi, etc.; or the iPhone's built-in camera filters: Chrome, Transfer, Instant, Fade, etc.

(Five of the most popular Instagram filters:


2 Face detection: smile. Calculate the smile level, for example from -10 to 10, divided into bands: -10 – 0, 0 – 3, 3 – 6, 6 – 10.

(There are many code examples on GitHub.)


3 If… do… logic.

If the smile level is in -10 – 0, do Lo-Fi;

If it is in 0 – 3, do Valencia;

If it is in 3 – 6, do Earlybird;

If it is in 6 – 10, do Rise;

Else, normal.
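That branching can be sketched as a plain function (the ranges and filter names are the planned ones from steps 2 and 3; the helper name is mine, and actually applying Instagram-style filters is a separate problem):

```java
public class SmileFilter {
    // Pick a filter name from a smile level in [-10, 10],
    // using the bands -10..0, 0..3, 3..6, 6..10 planned above.
    static String pick(double smile) {
        if (smile <= 0) return "Lo-Fi";
        if (smile <= 3) return "Valencia";
        if (smile <= 6) return "Earlybird";
        return "Rise";
    }

    public static void main(String[] args) {
        System.out.println(pick(-5));  // Lo-Fi
        System.out.println(pick(8));   // Rise
    }
}
```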


Practice 1 – some thoughts





Kinect + Processing


The Kinect will use its depth camera to create virtual light in a given space (though I'm still confused about which software to combine it with; is Processing OK?) and use its skeleton data to detect head tilt. The head-tilt level will then control the color and location of the virtual light.



There will be a webcam controlled by Processing software.

Processing will detect the smile, and the smile level will change the video filters.


To conclude: the Kinect detects head tilt, Processing detects the smile, and these two elements combine to build a 3D selfie.


Idea changes with practices


Generally, we think an artist develops a project straight from the first idea: he has an idea, works hard and hard and hard, and then makes it come true! But the truth is that the initial idea keeps changing throughout the project, and sometimes the final result is totally different from the original thought.


Last month I was studying Processing (I really recommend this book: Learning Processing by Daniel Shiffman, and I also like his examples included in Processing's library); recently I've been trying to do my summer project: Moving Selfie.

I researched existing code for facial expression. One example comes from Kyle McDonald's ofxFaceTracker and Jason Saragih's C++ facetracker libraries. The problem for me is that they (not only Kyle's examples but other artists' as well) all use C++. I'm not familiar with C++, and I guess starting to learn it would take a lot of time and would also waste the month I just spent learning Processing.

Then I wondered whether it is possible to translate C++ code into Processing. My friend told me it's hard, and even if it works, the performance in Processing will be much lower than in a C++ environment.


For the light-on-face project I need virtual light, but virtual light in a 2D camera doesn't look that good. So I bought a Kinect, planning to build a 3D model to detect the facial expression and then apply virtual light effects. I found the Kinect a very interesting device.

I also recommend this book: Making Things See: 3D Vision with Kinect, Processing, Arduino, and MakerBot.


About the virtual light, I found a very good example by Elliot Woods.

I'm thinking of applying this technique to my summer project.

The pipeline is:

  1. Capture Depth at CameraXY (OpenNI)
  2. Convert to image of WorldXYZ
  3. Pade transformation to create WorldXYZ map in ProjectorXY
  4. Calculate NormalXYZ  map in ProjectorXY
  5. Gaussian Blur X of NormalXYZ in ProjectorXY
  6. Gaussian Blur Y of NormalXYZ in ProjectorXY
  7. Light calculations on NormalXYZ, WorldXYZ maps in ProjectorXY
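Step 7 is the most standard part of the pipeline: given a surface normal and a light direction per projector pixel, a Lambertian diffuse term gives the light intensity. A minimal sketch of that calculation (this is the textbook N·L model, not Woods's actual shader, which is not public in this form):

```java
public class LambertLight {
    // Diffuse (Lambertian) intensity: max(0, N . L), where N is the
    // unit surface normal and L the unit direction toward the light.
    static double diffuse(double[] n, double[] l) {
        double dot = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        return Math.max(0.0, dot);
    }

    public static void main(String[] args) {
        double[] normal = {0, 0, 1};        // surface facing the viewer
        double[] ahead  = {0, 0, 1};        // light straight ahead
        double[] behind = {0, 0, -1};       // light behind the surface
        System.out.println(diffuse(normal, ahead));   // 1.0 (fully lit)
        System.out.println(diffuse(normal, behind));  // 0.0 (in shadow)
    }
}
```

Running this per pixel over the NormalXYZ and WorldXYZ maps is what makes the projected light follow the shape of the body.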


I found a lot of examples and tutorials on Processing + Kinect, and then focused my practice on the self-identity part. Ideas change with practice, as this article's title says: if I cannot do the facial expression, I can still do other practices and still concentrate on self-presentation and selfies.

I looked at these two examples: Ego Mirror and Body Dysmorphia.

Ego Mirror

Body Dysmorphia

(hah, sometimes it looks uncomfortable.)


The problem is that at first I was very excited to see so many gorgeous projects made with Processing + Kinect, but very soon I found that although these projects look different, in essence they are all the same. In other words: nothing new, actually.


The conclusion for me is that the hard part always was, is, and will be: CREATIVITY! CREATIVITY! CREATIVITY!


