
Processing Google Streetview

Coding is completely new to me, and trying to figure out Processing was like trying to learn a new language. This Google Streetview code took a little while for me to get my head around, but after what felt like hours of staring at what seemed like random sequences of symbols on the screen I finally got there.

So what was I trying to achieve? Tom set the CAP students the task of getting as close as possible to a location in Google Streetview, then creating a 360 view of that location using Processing. My location? Area 51. I am not one of those people who are massively into aliens, space or anything else related to that particular theme. But I do like to know things that I am not supposed to, and the intrigue of a mysterious location housing who knows how many secrets that we, the general public, will never find out really stands out to me. I decided on this location to see how close Google could get me to those secrets.

The answer is, not very close.

I can get to the main highway that runs adjacent to the facility. I can get to several viewing points in the Nevada wastelands. There is no major security gate in sight, just miles and miles of land.

I chose one of the locations on the highway, closest to the turn-off to the facility. This is where Processing left me stumped. I loaded the co-ordinates into the code, set it up the way it should work, pressed run, and was met with a frozen still of the location. Something isn’t right, I thought. So I tried it again… and again… hoping for better results. Madness, I know. Rather than being impatient like the first few times, I then waited, just to see whether or not the full 360 would eventually load.

It did.

Kind of.

Rather than having a full 360, I had a long section of one still, and then a half loop of the rest of the scene. Okay, so that’s progress.

After a little while of playing around, trying to make it work, I decided to try a different set of co-ordinates: one of the viewing stations, equidistant from my original set of co-ordinates. Cautiously optimistic, I loaded them into the code, and this time it worked. I had a full 360 view of my location. After a little investigating I found that the original co-ordinates did not supply all of the images needed to make the full 360 loop, which is why my first attempt was so unsuccessful. This put my mind at rest: it was not me failing to understand the code.


I will upload videos later showing the two different co-ordinates and their results.

Ghibli FanFiction Stage 2

susu

I have been working with the Kinect in Processing (supervised by Tom Schofield), and now it can finally listen to my hand’s response. You rock! This is what I like about you, ススワタリ: always hard working.

I am going to capture you and send you to Pure Data in the next stage. You will carry the X and Y coordinates for me, won’t you? Trust me, you’ll be fine. All you have to do is hit a bang in Pure Data, and then you can meet all your pals there in the boiler room. This time Kamaji won’t be there, so don’t worry.

Alright. Good luck to both of us.

☹ + ♫ = ☺

music

My demo is here: https://www.youtube.com/watch?v=vc9u7_rLjFs

I want to speak Japanese, I really do. I sing this song all the time, and I listen to it every day at work. I don’t have a partner, but I love “Baby I Love You”.

My bedroom became a discotheque, running a song on loop for the whole night with this test. Only one RGB LED was found in the kit, so I could not attempt a three-LED blinking test.

This little board certainly opens up a whole imaginary world to creative people. Wiring is perhaps the most time-consuming part, but once we think of the result we are going to see, we almost forget who we are. In fact, we are no-one in this world… So… shut the light off… It is time for some music with LEDs.

Web Scraping with Processing

After our workshop with Tom on web scraping, we were asked to go and try scraping some data ourselves. We looked at writing a scraper in Python, which I found a little hard to get my head around. As I have worked in Processing before, it seemed logical to try to replicate a scraper using Processing.

The following code can be input into Processing and used to scrape the HTML from a given URL, printing the number of lines scraped.

void draw() {
  // re-fetch the page on each pass, so a live-updating URL shows its changes
  String[] lines = loadStrings("http://clive-wright.co.uk/"); // input chosen URL here
  println("there are " + lines.length + " lines"); // states "there are X lines"
  for (int i = 0; i < lines.length; i++) { // step through each line scraped
    println(lines[i]); // print line i
  }
  delay(1000); // time delay before the scraper runs again
}

My idea for this web scraper is to use it on a live-updating web page, hence the time delay at the bottom instead of a stop command.
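For anyone outside the Processing environment, the same line-counting idea can be sketched in plain Java. The class name and the hard-coded HTML snippet below are made up for illustration, standing in for a page fetched from a URL:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LineScraper {
    // Split a document into lines, mirroring Processing's loadStrings()
    static List<String> loadStrings(String document) throws IOException {
        List<String> lines = new ArrayList<>();
        BufferedReader reader = new BufferedReader(new StringReader(document));
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line);
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // stand-in for the HTML fetched from the chosen URL
        String html = "<html>\n<body>\n<p>hello</p>\n</body>\n</html>";
        List<String> lines = loadStrings(html);
        System.out.println("there are " + lines.size() + " lines");
        for (String line : lines) {
            System.out.println(line);
        }
    }
}
```

Swapping the hard-coded string for a real HTTP fetch would reproduce what the Processing sketch does.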

Digital Media – Final Prototype

DIAGRAM

Prototyping the Structure

Several pieces of custom-built hardware and some advanced computing techniques were used to build the interface of this project. The physical structure consists of two basic modules: an FTIR interface and a cymascope. Each module has almost the same body, and when combined they form the entire installation. Additionally, an Arduino microcontroller located inside the FTIR module controls most of the events happening inside the prototype.

The top surface of the FTIR module consists of a 35 by 25 centimetre piece of 10 mm thick acrylic glass. The frame that holds the glass in place is lined with a custom-made array of infrared LEDs. These special LEDs emit only infrared light, which is not visible to the human eye. A similar system could be built with normal LEDs; however, it would suffer serious problems, such as being affected by natural or ambient light, as I experienced in my previous prototypes.

To allow different materials to be placed on the glass, and to provide a tactile input to the spectator, a compliant surface was made using the Tinkerman method. The compliant surface consists of waterproof silicone, applied to a plastic film using a soft foam roller. This process creates a texture very similar to human skin that remains responsive to the FTIR surface.

The materials that create texture on the FTIR surface are a mat (representing the distortion effect), bubble wrap (noise effect), a soft towel (ambient echo), and a tablecloth made of synthetic cotton (delay effect). The pressure the spectator’s finger applies to these materials is measured by a touch sensor located below the frame and sets the effect level. As the spectator touches the materials, the relevant effects are triggered and applied to the ambient sound.

FTIR surface video can be found on this link: https://www.youtube.com/watch?v=D26N-rHWL_s

An Arduino microcontroller operates the components of the prototype, such as the LEDs and the pressure sensor. It has three tasks. First, it controls whether the IR LEDs are switched on or off, to decrease power consumption; since the IR LEDs run at 1.5 volts, an optocoupler separates the two circuits working at different voltages. Second, it controls the colour of the RGB LEDs to create a unique colour mixture in the liquid holder. Finally, it reads data from the pressure sensor placed under the acrylic frame.
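As a rough sketch of how a raw pressure reading might be turned into an effect level, here is a small Java example. The 10-bit reading, the percentage scale, and the names are my illustrative assumptions, not the actual firmware:

```java
public class EffectMapper {
    // Linearly map a raw reading from one range to another,
    // like Processing's map() or Arduino's map()
    static float map(float value, float inLow, float inHigh, float outLow, float outHigh) {
        return outLow + (outHigh - outLow) * (value - inLow) / (inHigh - inLow);
    }

    public static void main(String[] args) {
        int raw = 512; // hypothetical 10-bit ADC reading from the pressure sensor
        int level = Math.round(map(raw, 0, 1023, 0, 100)); // effect level as a percentage
        System.out.println("effect level: " + level + "%");
    }
}
```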

The cymascope module of the installation consists of a black acrylic liquid holder, a speaker, a sound-transmitter tube, a membrane and RGB LEDs. The final solution for transferring sound waves from the speaker to the liquid was inspired by the tonoscope. One end of the tube is glued to the speaker and the other end is enclosed by a flexible membrane, so any exchange of air between the inside and outside of the tube is blocked. As a result, the up and down movement of the speaker is transferred directly to the liquid holder, and depending on the vibration frequency, different patterns emerge on the liquid surface. The liquid holder was chosen to be black in order to reduce reflections coming from sources other than the LEDs. The RGB light sources are located on the sides of the liquid holder to make the colourful patterns visible from all sides.

Cymascope video can be found on this link: https://www.youtube.com/watch?v=fs0oehisD8U


Thinking after the Consciousness Cube Exhibition

Finding the centre coordinate of the cube was the difficult aspect, because the coordinate changes. Why is the coordinate so important? Because it is needed as the reference point when doing the explosion and the spinning. There is also a simple way: define the centre point’s coordinate as the centre of the screen at the start.
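One way to compute that centre is to average the positions of all the points. A minimal Java sketch of the idea, using a deliberately tiny, made-up point list with just two opposite corners in the sketch’s coordinate space:

```java
public class CubeCentre {
    // Average the x, y and z of every point to find the cube's centre
    public static void main(String[] args) {
        float[][] points = { {-300, -300, -800}, {300, 300, -200} };
        float cx = 0, cy = 0, cz = 0;
        for (float[] p : points) {
            cx += p[0];
            cy += p[1];
            cz += p[2];
        }
        int n = points.length;
        System.out.println(cx / n + ", " + cy / n + ", " + cz / n);
    }
}
```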

For future consciousness shapes, I may make the shape itself more random, not only its randomly changing colours. That should make the 3D computer graphic a more vivid conception of consciousness.

Codes with comments from Consciousness Cube

import processing.opengl.*;

float[][][][] pointList;
int cubeLength = 600;
int num = 40;
float distance = cubeLength / num;
float cubeRotate = 0;

void setup() {
  size(800, 800, OPENGL);
  // create a cube made of points, using an array
  pointList = new float[num][num][num][3];
  frameRate(30);
  pointList[0][0][0][0] = -300;
  pointList[0][0][0][1] = -300;
  pointList[0][0][0][2] = -800;

  for (int i = 0; i < num; i++) {
    for (int j = 0; j < num; j++) {
      for (int t = 0; t < num; t++) {
        pointList[i][j][t][0] = pointList[0][0][0][0] + i * distance;
        pointList[i][j][t][1] = pointList[0][0][0][1] + j * distance;
        pointList[i][j][t][2] = pointList[0][0][0][2] + t * distance;
      }
    }
  }
}

void draw() {
  translate(width/2, height/2, 0);

  // move the point of rotation along the z axis,
  // rotate, then translate back to the same point
  translate(0, 0, -500);
  // rotateX(0.1*PI);
  rotateY(cubeRotate);
  translate(0, 0, 500);

  background(0);
  // change the cube's colour, and control the speed of the change (every 60 frames)
  if (frameCount % 60 == 0) {
    stroke(random(255), random(255), random(255));
  }
  for (int i = 0; i < num; i++) {
    for (int j = 0; j < num; j++) {
      for (int t = 0; t < num; t++) {
        point(pointList[i][j][t][0], pointList[i][j][t][1], pointList[i][j][t][2]);
      }
    }
  }

  // trigger using mousePressed: while the button is held,
  // points spill out with an explosion effect
  if (mousePressed) {
    for (int i = 0; i < num; i++) {
      for (int j = 0; j < num; j++) {
        for (int t = 0; t < num; t++) {
          // instead of adding a speed, lerp between this position and a target
          // position (the centre of the screen, for example), overwriting this
          // position with the new position each frame
          pointList[i][j][t][0] = lerp(pointList[i][j][t][0], pointList[i][j][t][0] + i * distance * 10 - 3000, 0.05);
          pointList[i][j][t][1] = lerp(pointList[i][j][t][1], pointList[i][j][t][1] + j * distance * 10 - 3000, 0.05);
          pointList[i][j][t][2] = lerp(pointList[i][j][t][2], pointList[i][j][t][2] + t * distance * 10 - 3000, 0.05);
        }
      }
    }
  }
  // when the mouse button is released, the cube returns to normal: re-draw it
  else {
    pointList[0][0][0][0] = -300;
    pointList[0][0][0][1] = -300;
    pointList[0][0][0][2] = -800;
    for (int i = 0; i < num; i++) {
      for (int j = 0; j < num; j++) {
        for (int t = 0; t < num; t++) {
          pointList[i][j][t][0] = pointList[0][0][0][0] + i * distance;
          pointList[i][j][t][1] = pointList[0][0][0][1] + j * distance;
          pointList[i][j][t][2] = pointList[0][0][0][2] + t * distance;
        }
      }
    }
  }
  // keep the cube spinning
  cubeRotate += 0.01;
}
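The explosion works because lerp() eases each coordinate a fixed fraction of the way toward its target every frame. A standalone Java sketch of that easing, with lerp() reimplemented since this runs outside Processing, and a coarser step of 0.25 (instead of the sketch's 0.05) so the motion is visible in a few printed frames:

```java
public class LerpDemo {
    // Processing-style linear interpolation
    static double lerp(double start, double stop, double amt) {
        return start + (stop - start) * amt;
    }

    public static void main(String[] args) {
        double x = 0;        // current coordinate
        double target = 100; // where the explosion wants the point to go
        for (int frame = 0; frame < 3; frame++) {
            // each frame, move a quarter of the remaining distance
            x = lerp(x, target, 0.25);
            System.out.println(x);
        }
    }
}
```

Because the step is a fraction of the *remaining* distance, the point decelerates as it approaches the target, which is what gives the explosion its soft finish.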

Spinning Sphere – Before the Consciousness Cube

The spinning sphere was my first idea for the exhibition, before I changed it to the consciousness cube. Although some details changed between the two, the basic spinning idea and the idea of a trigger effect were kept.

The spinning sphere used Blender to create a 3D model, which was loaded into Processing as an *.OBJ file. The following sketches are from Blender and Processing. The texture changed a little from Blender to Processing; however, the texture shown in Blender is the effect that I wanted.

from Blender:

from Processing:

Consciousness Cube

This is a spinning cube made up of thousands of small points. The colour of the cube changes randomly every 60 frames. When the mouse is pressed over the display area, all of the points diffuse out into a larger, deeper space.

The idea is built around the theme of “Consciousness”: a “consciousness-shaped” 3D computer graphic. The concept of consciousness can be abstract and bodiless, which is why thousands of points build up the cube instead of one single solid cube, as a group of points can also feel abstract and bodiless. In this case, each point symbolises one simple idea or feeling, and thousands coming together build up an individual’s system of consciousness. The cube spinning and changing colour all the time symbolises how the form of consciousness varies, and the mouse press symbolises that consciousness has no boundary: a small trigger can lead to its changing.

The sketch for this consciousness cube uses arrays and "for" loops.

THE CONSCIOUSNESS CUBE

by mouse press…

Lookalikes processing assignment


A1.

// the basic idea is to use "for" loops
// calculate the radius of the circles so they fit the screen size
// draw the left and right circles at once (same y value)
// change the x and y values, looping 10 times in total

void setup() {
  size(500, 500);
  background(255);
  fill(255, 0, 0);
  stroke(0, 0, 255);
  strokeWeight(3);
  smooth();
}

void draw() {
  int xPos = 25;
  int yPos = 25;
  for (int i = 0; i < 10; i++) {
    ellipse(xPos, yPos, 50, 50);
    ellipse(500 - xPos, yPos, 50, 50); // mirrored partner circle
    xPos += 50;
    yPos += 50;
  }
}
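The left/right pairing in A1 comes from mirroring each x value about the canvas width. As a standalone Java illustration of just that arithmetic (class name made up, canvas width as in the sketch):

```java
public class Mirror {
    public static void main(String[] args) {
        int width = 500; // canvas width, as in the sketch
        for (int xPos = 25; xPos <= 125; xPos += 50) {
            // each circle at xPos has a partner mirrored about the canvas centre
            System.out.println(xPos + " <-> " + (width - xPos));
        }
    }
}
```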

A1 result

A2.

// draw the lines first, giving the x and y values
// then draw the four large squares, and the four little squares last
// the basic idea is that the last thing drawn sits on the top layer
// note: the fill colour must be set before drawing the shapes

int xPos = 125;
int yPos = 125;

void setup() {
  size(500, 500);
  background(0);
  smooth();
  strokeWeight(2);
  stroke(230, 0, 230);
  line(0, 0, 500, 500);
  line(500, 0, 0, 500);
}

void draw() {
  fill(100);
  rect(xPos, yPos, 75, 75);
  rect(500 - xPos - 75, yPos, 75, 75);
  rect(xPos, 500 - yPos - 75, 75, 75);
  rect(500 - xPos - 75, 500 - yPos - 75, 75, 75);

  fill(180);
  rect(xPos + 50, yPos + 50, 25, 25);
  rect(500 - xPos - 75, yPos + 50, 25, 25);
  rect(xPos + 50, 500 - yPos - 75, 25, 25);
  rect(500 - xPos - 75, 500 - yPos - 75, 25, 25);
}

A2 result

A3.

// this drawing divides into 5 parts
// calculate the relative positions of the lines and circles
// draw the first group of lines, using a "for" loop
// draw the first group of circles, using a "for" loop
// draw the second group of lines
// draw the second group of circles
// draw the quadrilateral by giving the x and y values of its four corners

void setup() {
  size(500, 500);
  background(0, 0, 255);
  stroke(255, 0, 0);
  strokeWeight(1);
  smooth();
  fill(0, 255, 0);
}

void draw() {
  for (int yLine = 0; yLine <= 100; yLine += 4) {
    line(0, yLine, width, yLine);
  }
  for (int xPos = 0; xPos <= width; xPos += 50) {
    ellipse(xPos, 125, 50, 50);
  }
  for (int yLine = 150; yLine <= 250; yLine += 4) {
    line(0, yLine, width, yLine);
  }

  quad(100, 250 + 50, width - 50, 250, 300, height - 100, 50, height - 100);

  for (int xPos = 0; xPos <= width; xPos += 50) {
    ellipse(xPos, height - 50, 50, 50);
  }
}

A3 result