
HSS8121: Sound Art and Public Space 17-02-16

On the 16th of February we made solar sound modules using a hex Schmitt inverter.

HSS8121: Flip Flop Circuit 17-02-09

On the 9th of March we made flip flop circuits. I first did this in the ’70s using plans taken from the Woolworth’s published Project Book 057 – Make Your Own Computer.

Book 159 – Experiments in Sound may also be useful in the context of #mediaarcheology.

HSS8121: proposal

Why do I want to research making digital camera images look like film?
As a film user myself, I find it fascinating and convenient to shoot film, as I don't need to adjust the colours much and each film stock has its own unique colour.
However, many friends of mine use digital cameras, and they often ask me to give their images a film-like appearance. That is a real challenge, as I can't make digital images look exactly like film. So I started to research whether, and how, I could fully simulate the appearance of 35mm film.

Is film dying?
I don't think so, although in the 2010s many film companies announced that they would stop making film. Surprisingly, in 2017 Kodak announced that it would resume production of the famous Ektachrome stock, and other companies, such as the Italian firm Ferrania, also decided to restart production. According to Ilford’s sales figures, the recovery has been under way for some time, with sales rising by more than 5% a year over the past three years.
So people still love film, and it makes a good subject for a research proposal.

In what ways?
Algorithms, histograms, manual editing.
These are the three approaches I identified first. Reinhard, Ashikhmin, Gooch and Shirley (2001) used simple statistical analysis to impose one image's colour characteristics on another. They achieved colour correction by choosing an appropriate source image and applying its characteristics to a target image. Their method first transforms the colour space from RGB to LMS, then converts the result into a new colour space, lαβ, which is a transform of LMS. The colour correction then operates in lαβ space, because that allows the colours in the different channels to be treated separately. Finally, the lαβ values are transformed back to RGB for display, applying the source's colour character to the target image.
I was inspired by this and decided to test using histograms.
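The core of the statistical approach above can be sketched in a few lines. This is a deliberately simplified, per-channel version: the Reinhard et al. paper works in the lαβ space, whereas here the mean/standard-deviation matching is shown directly on a single channel of raw values for clarity. The sample values and function names are my own illustration, not from the paper.

```python
# A simplified per-channel sketch of statistical colour transfer,
# loosely after Reinhard et al. (2001). The paper operates in lab
# space; here the mean/std matching is shown on one channel only.
from math import sqrt

def channel_stats(values):
    """Return (mean, standard deviation) of a list of channel values."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, sqrt(var)

def transfer_channel(target, source):
    """Impose the source channel's mean and std onto the target channel."""
    t_mean, t_std = channel_stats(target)
    s_mean, s_std = channel_stats(source)
    scale = s_std / t_std if t_std else 1.0
    # Centre the target, rescale its spread, then shift to the source mean
    return [(v - t_mean) * scale + s_mean for v in target]

# Example: push a flat "digital" channel toward a wider "film" source
digital = [100, 110, 120, 130]
film = [90, 120, 150, 180]
graded = transfer_channel(digital, film)  # → [90.0, 120.0, 150.0, 180.0]
```

In a real test the same operation would be applied to each of the three lαβ channels of full images rather than to toy lists.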

Results
I believe that digital cameras can't simulate film's appearance 100%, as imaging on film and on electronic sensors works differently. So they perform differently, and film performs better in highlight areas.

HSS8121: Seven Stories/Late Shows follow-up

Whew. It’s been quite a month. I’ve just about recovered from the barrage of coursework (although there are still a few little deadlines to mop up), enough to get my hands on all the photos and videos of the Seven Stories/Late Shows event, which went really well!
The pop-up performances worked better than I could have hoped for, and really suited the venue and feel of the event. The culture-crawl premise of the Late Shows and the way the venue encourages you to wander around and explore really lent themselves to creating a sense of discovery, as people got caught up in performances without necessarily planning to even see them.

I can’t take all the credit for the night, of course. Mike had a really interesting setup printing chapbooks in the studio, the performers’ talent was all their own, and credit for most of the documentation goes to Garry and Jade. But there is a certain amount of satisfaction in seeing the parts of the programme that I organised go off (mostly) without a hitch. I remember someone telling me that the best way to succeed in the arts is to team up with people more talented than yourself, and there’s definitely some truth there (-:

I had steeled myself for a lot of logistical difficulties, as well as the challenges of mediating between a venue and a lineup of potential performers with different needs, but everything went remarkably smoothly (although it certainly helped that I was mentally prepared). Didn’t stop me from being stressed out on the night. My own performance was… passable, in my own opinion. I could probably have pulled off a better set if my focus hadn’t been on coordinating everything else. But it was a lovely crowd, and a fun set nonetheless.

And now, a long overdue shout-out to the great local performers who came along, and went the extra mile, tailoring their sets to the themes of the exhibits and the venue. Follow the links below to check out their various pages and a sample of their work:

Doug Garry – old friend from the Edinburgh poetry scene, and one of my favourite spoken word performers. Also one of the Loud Poets.
Jayne Dent – amazingly talented solo singer, who also studies fine art here at Newcastle University.
Rowan McCabe – the Door-to-Door Poet. He does exactly what the name suggests, and he’s great at it.
Rosie Calvert & Will Finn – half (two quarters?) of a cappella folk quartet the Teacups, and unfairly talented musicians in their own right.

That’s it from me for now. Expect some further screenwriting grumblings (it’s back to the drawing board again for me), followed by (hopefully) some new projects! In the meantime, here are some tasty video clips from the event: https://www.youtube.com/playlist?list=PLJNHS2Lo-SiSS99CJTsedQi-4vT8AOAEQ

HSS8121: Research Proposal – ‘Taste’ and Social Media

I was initially under the impression that the Research Proposal could be used to plan for the Creative Practice Project module, which I was very happy about. Obviously I see that these are different modules and should be kept separate, but I had my hopes up after reading the uni guidelines on Proposals:

‘If you are studying for a Master’s degree, you will almost certainly be required to write a dissertation proposal, which may be formally assessed as part of a research methods module.’

On top of this it just seemed logical to use this for the CPP, as most people will have decided on topics that they are very invested in which in turn could have produced some more passionate proposals/presentations. However, I do appreciate this as a ‘practice run’ for planning the research involved in CPP, and despite my initial annoyance, I came to quite enjoy writing this proposal.

As someone who spends a (worryingly) large amount of time discussing music taste online, I’m surprised I haven’t written on the topic before. Discovering the academic roots of something I’ve often pondered was fascinating and eye-opening. The main problem within this area was finding my own niche that would be suitable for research. As I didn’t have a specific idea in mind when starting, I decided to put my faith in unearthing something during my research; which I did. The talk of ‘social network theory’ really gripped me, and an obvious link is the more commonly used sense of ‘social network’, pertaining to social media websites. As I mentioned previously, I engage in discussion of music tastes very frequently online, and often this is with complete strangers. In a way I am already (subconsciously) engaging in the research I proposed, and have been for a long time.

I can’t say whether I will be pursuing this topic with any intent in the near future (the Creative Practice Project kinda gets in the way of that). However, I think assessing the topic will make me more conscious about how I’m engaging with people online and how social ties and online communities affect my music taste (and maybe how I affect the taste of others).

DMS8013 Log#4; Stress

The third and final artifact created as part of DMS8013, “Stress” is a 3D model created using the 3D modelling software Blender, depicting an angry man punching a book out of sheer frustration at studying. It links in with the themes of the other artifacts, which cover similar topics and together try to represent life from the perspective of students who struggle.

Stress marked the second time I have properly used Blender for anything, and the results mark a clear improvement over the first attempt. To use Blender, one generally has to select from a series of basic, pre-defined meshes modelled after simple geometric shapes: cubes, spheres and so on. In my first foray into Blender, I exclusively used these pre-defined shapes to create an extremely blocky effigy using only the program’s “Object Mode.” Object Mode only allows for the creation, placement, and some very basic manipulation of these shapes, such as scale and orientation. It is, in essence, a macro view that lets the user organise their render on a larger but unfocused level. As part of Stress, I intended to go further and delved into Edit Mode, which allows much more precise and refined manipulation of shapes and objects on a micro level.

The first use of Blender, 3D printed. Crude effigy.

Edit Mode is key to creating any sort of actual, detailed render. The ability to so thoroughly shape and distort objects allows for far smoother and far more detailed creations. In the end, both projects depicted a very similar thing – a person. The difference is in the detail and proportion, however, with the latter far more humanoid in appearance and far better proportioned.

The second attempt at Blender. Actual limbs and proportion.

Like its predecessor, Stress was made using a variety of pre-made geometric shapes, primarily spheres and cylinders, which were then stretched, thinned and deformed into more limb-like shapes. Manipulating vertex placement allowed for the illusion of joints such as elbows and knees, while depressions in the face allow for some very minor facial detail such as a mouth, nose, and eyes.

Stress separated into all its component parts. By zooming in, it is also possible to see some of the facial detail.

There were a few do-overs required in certain parts, however, especially the arms. Getting the scale and proportion of these limbs in particular was difficult, and there even came a moment when I opted to delete the entire limb and all work done on it in order to start again. As it turns out, each object, particularly a sphere, has ‘central vertices’ that can make a great difference to the ease with which certain manipulations are possible. On the first try, the orientation of these vertices – facing vertically instead of horizontally – made stretching the limb into an actual arm incredibly awkward to achieve. Similarly, in order to flatten some sections out, rings of vertex faces needed to be painstakingly flattened into each other one by one, otherwise the object would appear too spherical.

Less a problem with Blender and more one with the accompanying technology was the matter of rendering and actual printing. In theory, an object with a high polygon count and lots of geometry should come out smoother and be far nicer to look at. However, such renders are also dangerously heavy on the processing power of whatever machine is being made to render them. More than once I came close to crashing Blender, or even my entire PC, by experimenting with too much geometry. This issue extends beyond Blender: Cura, the software used to create files readable by our 3D printers, might have been able to handle the final, smooth object, but the printer itself could not, at least not at the scales I was aiming for.

The heavily deformed sludge monster made from using too small a scale print

Scale was a consistent issue. The 3D printer did not appreciate legs and feet very much, and trying to keep them in the final model resulted in half-melted stumps that the printer couldn’t actually build on without also creating a network of scaffolding. Likewise, with too small a scale came a loss of detail, smoothness, and overall shape, as there was not enough space for the printer to properly – and neatly – build on.

3D printing aside, however, the digital model was at least a success by itself, and with some tweaking of scale and perhaps a higher quality printer, a much more accurate model would be entirely possible.

Seven Stories, Bears, Constellations and Stress

I found the Seven Stories brief alarmingly challenging, considering I had expected it to be one of the more ‘fun’ assignments. However, being so far outside of my comfort zone proved to be a bit of a problem when developing the initial idea for my response. This is maybe due to my tendency to overthink everything but I won’t get into that…

When starting this assignment I had that all too familiar feeling of being simultaneously happy and overwhelmed by the freedom given. Initially I took to ‘Yossarian’ (the lateral search engine introduced in one of our final lectures) to get some thoughts going. I kept it simple, using words from the programme themes for May 2017; namely bears and aliens. In all honesty this didn’t get any sparks flying and turned out to not be very useful. I can’t actually remember where my final idea of using the ‘Ursa Minor’ – ‘Little Bear’ – constellation came from (I think it was somewhat of a ‘lightbulb moment’).
When researching this idea, I constantly battled self-doubt and kept abandoning it for short periods of time to try and think of something else before returning. My favourite of these other ideas was a giant teddy bear sampler. However, not wanting to have wasted so much time researching my other idea, I eventually convinced myself that it was/is a good idea. I realise this sounds like I ‘settled’ for something I wasn’t satisfied with, but I do have faith in the idea, and it provided much more depth of discussion than any other idea would have.

Despite not being overly-enthused with this assignment, I do feel like it was a good exercise to do. I certainly feel that in future situations such as this I would be more prepared to respond in a professional manner, and of course outside University I would only respond to something I am personally passionate about (meaning ‘ideas’ would be less of an issue). In a more general sense, engaging with topics related to Seven Stories has given me more of an appreciation for children’s literature and entertainment. Having attended two ‘story times’ now (once on our day visit and once at the late shows), I am very impressed not only by the artists’ engagement with children, but with all age groups.

HSS8121: Media Archaeology

After reading Wolfgang Ernst’s Sonic Time Machines a process of practical engagement was undertaken to better understand the underlying methodology. This coincided with preparation for an exhibition at Hoults Yard. The work to be shown required the use of a number of vintage radios. These needed to be cleaned and mended in advance of the exhibition. The process of dismantling and cleaning was documented. Photographs can be seen by viewing the links which follow:

DMS8013: Research – The relationship between pitch and frequency

Pitch is an auditory sensation in which a listener assigns musical tones to relative positions on a musical scale based primarily on their perception of the frequency of vibration. Pitch is closely related to frequency, but the two are not equivalent. Frequency is an objective, scientific attribute that can be measured. Pitch is each person’s subjective perception of a sound wave, which cannot be directly measured. However, this does not necessarily mean that most people won’t agree on which notes are higher and lower.

Sound waves themselves do not have pitch, but their oscillations can be measured to obtain a frequency. It takes a sentient mind to map the internal quality of pitch. However, pitches are usually associated with, and thus quantified as frequencies in cycles per second, or hertz, by comparing sounds with pure tones, which have periodic, sinusoidal waveforms. Complex and aperiodic sound waves can often be assigned a pitch by this method.

According to the American National Standards Institute, pitch is the auditory attribute of sound according to which sounds can be ordered on a scale from low to high. Since pitch is such a close proxy for frequency, it is almost entirely determined by how quickly the sound wave is making the air vibrate and has almost nothing to do with the intensity, or amplitude, of the wave. That is, “high” pitch means very rapid oscillation, and “low” pitch corresponds to slower oscillation. Despite that, the idiom relating vertical height to sound pitch is shared by most languages. At least in English, it is just one of many deep conceptual metaphors that involve up/down. The exact etymological history of the musical sense of high and low pitch is still unclear. There is evidence that humans do actually perceive that the source of a sound is slightly higher or lower in vertical space when the sound frequency is increased or reduced.

A440 or A4 (also known as the Stuttgart pitch), which has a frequency of 440 Hz, is the musical note A above middle C and serves as a general tuning standard for musical pitch.

Prior to the standardization on 440 Hz, many countries and organizations followed the French standard since the 1860s of 435 Hz, which had also been the Austrian government’s 1885 recommendation. Johann Heinrich Scheibler recommended A440 as a standard in 1834 after inventing the “tonometer” to measure pitch, and it was approved by the German Natural History Society the same year. The American music industry reached an informal standard of 440 Hz in 1926, and some began using it in instrument manufacturing. In 1936 the American Standards Association recommended that the A above middle C be tuned to 440 Hz. This standard was taken up by the International Organization for Standardization in 1955 (reaffirmed by them in 1975) as ISO 16. Although not universally accepted, since then it has served as the audio frequency reference for the calibration of acoustic equipment and the tuning of pianos, violins, and other musical instruments.
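The relationship between the A440 reference and the rest of the scale can be made concrete with a short calculation. In twelve-tone equal temperament, each semitone multiplies frequency by 2^(1/12), so a frequency can be converted to the nearest note relative to A4 = 440 Hz (MIDI note 69). This is a minimal sketch; the function name and the use of MIDI numbering are my own choices for illustration.

```python
# Convert a frequency to the nearest equal-tempered note, taking
# A4 = 440 Hz (MIDI note 69) as the reference discussed above.
from math import log2

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq, a4=440.0):
    """Return (name, octave, midi_number) of the nearest note to freq."""
    # 12 semitones per octave; log2 of the frequency ratio gives octaves
    midi = round(69 + 12 * log2(freq / a4))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1  # MIDI 60 is C4 (middle C)
    return name, octave, midi

# 440 Hz -> ('A', 4, 69); middle C, ~261.63 Hz -> ('C', 4, 60)
```

Running the reference pitch through it recovers A4 exactly, and ~261.63 Hz comes out as middle C, the note the colour organ below is based on.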

The electronic colour organ will be based on the frequency of middle C.

https://en.wikipedia.org/wiki/Pitch_%28music%29#Pitch_and_frequency

https://en.wikipedia.org/wiki/A440_(pitch_standard)

DMS8013: Research – correspondence media

The most outstanding colour-sound design based on pursuing harmony between the human senses was American inventor Bainbridge Bishop’s colour organ. In 1877, Bishop obtained a patent for his first colour organ. Essentially, the instruments were lighted attachments designed for pipe organs that could project coloured light onto a screen in synchronisation with a musical performance. He claimed that a simple colour did not give the sensation of a musical tone, but blended coloured light did. Bishop’s colour organ is unique because it broke through the immanent concept and engaged with correspondence technology, even though it was imperfect owing to the limitations of the technology of the time.

“Chords were shown properly, the lower bass spreading over the whole as a ground or foil for the other colours or chords of colour, and all furnishing beautiful and harmonious effects in combination with the music.” (Bishop, 1893)

“The natural harmonic chord of light, as illustrated by the rainbow, shows red as its fundamental or keynote; for this reason I think we should take C, the key-note of the natural scale. It will be observed that its dominant is greenish-blue, its subdominant yellow-green. The greens of nature seem to make up combinations and masses of greens inclining to these two hues. A pure crude green seems to be out of place in a landscape, and, if seen, it generally produces a harsh and discordant effect.” (Bishop, 1893)

I was thinking about whether I could make a device based on the concept of Bishop’s colour organ using open-source technologies.
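As a first sketch of such a device, the twelve pitch classes can be spread evenly around the hue wheel, with C mapped to red following Bishop's remark that red is the "keynote" of the natural harmonic chord of light. The even spacing and the function names are my own assumptions for illustration, not Bishop's published scheme; on an open-source platform the RGB output would drive LEDs rather than a screen.

```python
# Illustrative colour-organ mapping in the spirit of Bishop: the twelve
# pitch classes are spaced evenly around the hue wheel, with C mapped
# to red as the "keynote". The exact mapping is an assumption of mine.
import colorsys

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_to_rgb(name):
    """Map a pitch class to an (r, g, b) tuple of 0-255 values."""
    idx = PITCH_CLASSES.index(name)
    hue = idx / 12.0  # C -> 0.0 (red); G lands in the blue-green region
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

# C maps to pure red: (255, 0, 0)
```

Interestingly, this naive mapping already echoes Bishop's observation quoted above: the dominant of C, which is G, falls in the blue-green part of the wheel.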

https://en.wikipedia.org/wiki/Color_organ

Toward 21st Century Wundermaschinen – A Practice-based Inquiry Developing Media Archaeology as an Artistic Methodology