HSS8123; Filming

| August 16th, 2017

*Finally got around to posting this now that the blog is back up and I’m not as swamped with the project itself.

Filming has finally been done. Given only a week in the sound studio to do what I needed, every hour counted, and it virtually became my home for that time.

The completed first set, and final models in place while filming the first “act.”

The second set, featuring the two characters Enki and Ada, as well as the scrapyard of all the failed test subjects, both in real life and in-universe.

Final setup for the second stage.

For the most part, though hectic and a little tedious at times, filming went rather smoothly. Another friend, an actual photographer, came in early during my week-long time limit and gave me some pointers on using the equipment I had on loan. As it turned out, my lighting was not ideal, which meant I needed to reshoot the first three scenes, as their lighting quality was both poor and inconsistent with the rest of the film. Though some corners were cut, and some scenes could perhaps have used more frames, I am at least content in knowing that I have plenty of material to actually make the film itself.

HSS8123; The Dog

| August 16th, 2017

*Finally got around to posting this now that the blog is back up and I’m not as swamped with the project itself.

My little clay man in his chair, Enki, is only one of two characters integral to the film I’m trying to make. The deuteragonist of the piece is Ada, the cybernetic dog created by Enki as a companion. If I thought the chair was bad, however, the damn dog was an even lower circle of hell. Unlike the chair, which only needed moving wheels, Ada requires more flexible, clay-like movement. He needed to be able to move his head, wag his tail, maybe even have movable legs if I was feeling especially ambitious (which, in the end, I wasn’t, due to continued time constraints).

I mentioned the issue of scale in the previous post, and with Ada the dog it reared its ugly head again. In order to facilitate head movement, the plan was to incorporate a ball and socket joint into the dog’s model, allowing a fairly free range of movement in a number of directions. Obviously, this joint would have to be fairly small in comparison to the rest of the body, otherwise the proportions would be atrocious, or I’d have to scale the whole thing up to a ridiculous extent. This proved problematic for the printer, and it took many long hours of tinkering and tweaking to finally get a model it could print without horrible deformities.

The many failures of 3D printing, including some of the dog models.

An eventual success. By fixing the ball to the body, and using the joint as a face, it became much easier for the printer to process. For the tail I found an elegant solution: a bit of twisted wire hooked around a small stub with a hole in it.

Next was giving the dog some texture or fur that still retained a metallic/cyber feel. This thread was one avenue, but didn’t quite work.

Another attempt at fur. A little too bushy and messy, however.

The idea to include fur on the dog was both a visual and technical one. Mostly it meant I wouldn’t need to include legs if the fur was thick enough, easing animation and allowing it to simply float along. As happy as I was with it, I simply couldn’t find anything with the right texture or size to properly act as fur, and so the idea was eventually scrapped.

HSS8123; More Models

| August 16th, 2017

*Finally got around to posting this now that the blog is back up and I’m not as swamped with the project itself.

It has been a grind, sometimes literally, but I’m finally starting to get the models in place. Though I’ll likely only be using one, I’ve luckily been able to get some very basic clay sculpting lessons from a friend and have made a few extra tests to get a good feel for size, flexibility, and durability. Many of these models are too small to properly use an armature, a vital part of the process. Indeed, many of the hurdles of this project have so far revolved around scale.

If there is one thing I am quite unhappy about with this final model, it is the lack of hubcaps on the wheels of the chair. I tried glue, a cleaner option, but the material used in the printing process is not very conducive to adhesion, and thus I’ve been forced to use clay to keep the wheels from falling off. Even that isn’t perfect, and they often have to be adjusted after a little rolling. They are, however, functional, which at this point I will happily take.

HSS8123; Sets and Models

| August 16th, 2017

*Finally got around to posting this now that the blog is back up and I’m not as swamped with the project itself.


Having ideas is one thing; making them physical is entirely another. Although I am happy with the ideas I’m currently running with, they have presented many of their own time-consuming issues which need to be overcome. On a more positive note, over the past few weeks I’ve at the very least been able to gather most of the materials I’ll need for a set, and been able to stitch plans together now that I know how much space I have to work with. The primary idea is to have two sets: one in white to represent the sterile environment of a laboratory, while the other is to be of the outside, in a desert and scrapyard to be precise.

Playing around with arts and crafts to see what works.

Prop and environment making. Here we have a computer made from an old alarm clock. The surgery table is made from the back of the same clock, while the arms are wires, floss brushes, and coloured beads.

However, though set building may be going well, model making is not. In the previous post I talked about the prospect of making my main character wheelchair-bound, alleviating many of the technical necessities normally part of making stop-motion. However, 3D printing such a chair has proven a headache too; the printer does not work well with small parts, and any item needs to be of a certain scale for a successful print. Getting the chair and its accompanying axle and wheels to a size that isn’t too big for my sets or props, but which can also be printed to a decent standard so the parts still fit together, is proving very demanding. Given the speed of printing, this also means many hours are going down the drain.

Early model for a chair. At first glance this may seem perfectly functional, but getting axles and wheels to go with it at such a small scale is proving impossible.

A larger model. Better on scale, but an error near the end of printing meant the chair was not finished.

As exciting a technology as 3D printing is, it’s still very much taking its baby steps, at least as far as our printer is concerned. Beyond the often excruciating time needed to print anything, trying to print anything smaller than an inch is prone to failure. Sometimes this applies even to items at larger scales – the printer is prone to jamming and other errors, and when one such failure occurs the entire print needs to be started again. After, of course, a lengthy repair period.

HSS8123; Second Drafts

| July 11th, 2017

There’s no such thing as a final first draft. In just a few days and a handful of experiments, my ideas and projected course have changed quite substantially. This was to accommodate what I feel is a better, more fun and enjoyable script, while also smoothing out technical hurdles that could pose a serious threat to my schedule and/or sanity.

These hurdles came in two flavours: the first was deformation, the second keeping the damn models upright. As I quickly learned after opening up my packs of clay and throwing together some very quick, crude experimental models, dear old Plasticine isn’t quite sticky enough on its own to hold up even a small model, and it is extremely difficult to keep any sort of consistency while making movements and actually animating. Legs grow stubby as they’re pushed down onto whatever surface you’re using, arms grow twisted and deformed, and so on.

Mr. Prototype mk.I becoming Mr. Prototype mk.II after having a skeleton transplanted into him.

My first plan was to build on the 3D printing aspect of the project and give the characters, even those meant to be fully human and “fleshy,” a skeleton I could wrap the clay around. The idea was that this would help keep consistency while also perhaps creating a better centre of gravity. It helped slightly with the former, not so much with the latter.

Luckily, professional help was right around the corner and I managed to finally get my hands on a copy of Stop Motion by Susannah Shaw, an excellent book covering all the basics and offering hints and advice for beginners. Browsing through it was an informative experience, even if not all the sections or techniques were relevant. Surprise, surprise: the very two hurdles I encountered in my first few experiments are specific issues that need to be addressed by all stop-motion animators. What I had tried to do with the skeleton is referred to as an armature, and is what actually allows a model to be flexible and thus animated.

As to keeping things upright, the usual solution would be to either use magnets and a steel “foundation” as part of the set, or to include bolts in the model’s feet which could be secured in holes drilled in the floor of the set; these holes would then be filled in once the model’s foot has passed over it to the next.

The magnet solution is both very costly and would be somewhat unwieldy in the space I have, as well as total overkill. The “tie-down” method, meanwhile, is more technically advanced and far more time-consuming, and I do not have ample access to the resources either requires. The subsequent workaround, which I am quite proud of, is to instead 3D print one character a wheelchair, and to change the other from a cyborg into a cyborg dog, the lower, wider frame hopefully providing much greater balance and stability.

These changes also demand a rewrite of the initial script, which on reflection is a good thing. I am prone to falling into the dangerous trap of trying to make things overly melancholy. More often than not it just comes off as pretentious, boring, and not all that interesting. I intend to wipe away all the waxing and wailing about what makes us human in an overly cryptic and annoying manner. Instead of an old, lonely scientist creating a humanoid companion à la Frankenstein, the new draft will be of an old, lonely scientist creating a simple pet, in a far simpler and more wholesome tone.

For many months the question of “what to do for the final project?” loomed like an amorphous, indecisive cloud. For the first few months of the course the vague answer was to be some kind of video game, but as time went on I became far less confident in that avenue. I did not feel my coding skills were refined or tempered enough to really make anything of it without the use of pre-made tools such as RPGMaker and the like. Further to that, no real concrete idea emerged as to what shape, genre, or style this non-descript game would take.

With my continued studies in Screenwriting, however, came an increased interest in film, and I began to side more with it as the clock ticked onward. Two questions remained: how I would make such a thing, and what would the actual film be about? Although I did dabble very, very briefly in digital animation, it was clear that, much like coding, I would probably need at least another year and a proper dedicated course of it to be able to pull off something worthwhile. Or rather, and more importantly, something I’d be happy with.

Stop-motion animation, though something I am still inexperienced in, looked a better street to take, having a slightly less intense skill floor and an equally extreme skill ceiling. It allowed me to continue playing to my primary passion of writing, in producing a script, but also to a newfound interest in 3D modelling and printing. With 3D printing, the visual possibilities of any stop-motion project are vastly expanded.

I still, however, required a story and central theme. This began as something vaguely sci-fi. Inspiration came from the likes of Fallout, Deus Ex, and even Overwatch to an extent, focusing on a theme of prosthetic (and possibly robotic) technology and its implications. This, I felt, could be wonderfully represented using a mixture of traditional clay/putty models and figures combined with 3D printed parts to represent artificial prosthetics, augmentations, and replacements. I didn’t have a story, but I did have something to work with.

Only, the thematic brainstorming didn’t end there. While casually researching the medium, it occurred to me that the use of clay in modelling holds a rather dignified spot in several creation myths, being used to mould mankind or other legendary creatures such as the golem. When combined with the use of 3D modelling to represent the “artificial” and “unnatural” state of man, it seemed particularly appropriate. Specifically, my research is focused on the myths of Mesopotamia and such figures as the god Enki (or Marduk, depending on the specific version) and their involvement in the slaying of other gods and subsequent creation of man.

Eventually a story and script began to form, one brief enough to be doable while still having (hopefully, at least) a strong thematic base that would give me plenty to talk about when it comes time to write the dreaded dissertation and all its academic necessities and formalities. A sort of retelling of these sorts of creation myths, with an added sci-fi flair and perhaps one or two pieces of Frankenstein thrown in for good measure.

A script obviously isn’t enough, however. For stop-motion, a set and models would be required, and I have been gathering supplies for it:

A terrain mat, clay, and foamboard to be used for the set


Extra set pieces, such as shrubs and barbed wire. Usually used in model train sets

Research, at least in-depth research, is not something I am used to. As a writer primarily dealing in fiction, specifically fictional worlds and settings, a large bulk of my brain’s processing power needs to go to creation, with research being more of a secondary objective used to smooth out the details and rope the setting closer to reality so as to make it believable.

Putting together a proposal for a real-life research project made for an interesting challenge then, one that required me to tweak how I actually view research as a whole. Naturally it ought to be something I was interested in and passionate about, but at the same time I needed to switch off, or at the very least mentally rearrange, aspects of my imagination. Assumption was an easy trap I was all too aware of and afraid of falling into while brainstorming ideas.

The topic I eventually settled on was one close to my own interests, namely fiction and its consumption in modern culture. Originally the research was going to revolve purely around the idea that sequels, adaptations and reboots were overly abundant. The questions and many of the core ideas from that avenue still remain in the final proposal, but it was not quite enough to really make for an interesting thesis, at least in my eyes. It needed an additional edge that could complement and help expand the field as a whole. This was where ideas of escapism, and how this might fit in with the idea of the sequel, came in.

Escapism as a topic is, I feel, inescapably tied to fiction which in turn is inescapably tied to human culture. However, this began to raise additional questions on exactly what is meant by ‘escapism’ and whether the modern definition of the term is really applicable to most people as a whole. At first I assumed it might be, even with a little tweaking, but quickly came to the conclusion that this was another opportunity to expand on the research as a whole by exploring the ideals of escapism and what it actually means to different people. The author may truly be dead; I did not want to force my own ideas of escapism onto others and it would surely only make for self-fulfilling research to do so. Thus, the questions began to open up and it became imperative to properly nail down how people feel about the term.

Giving people free rein to define escapism themselves could be argued to essentially split the research proposal in two: one half dealing with the nature of the sequel and its relation to consumers, the other with how people feel about escapism. This could still be argued even as the proposal enters its final form, but as noted, the two areas of sequel and escapism are always going to be linked to some extent. How strong that link is may well depend on the public, and it is that link that forms a core part of the research, as many of the questions intended to be asked can be turned on their heads. If someone does not feel they consume fiction as a form of escapism, why don’t they feel this way? Such open questions may be more difficult to process, but in the end the data would be far more valuable to studying this area of society, and can of course be refined into a more useful state as part of the thesis.

The research needed for the report on Seven Stories was much more practical in nature, since it would involve physical equipment that might not be immediately available to the centre, and a more specific form of expression that would be appropriate for a wider audience. The proposal itself was still, however, based in the realms of fiction. This of course was not just because fiction is my personal domain of interest, but simply due to Seven Stories being a large proponent of it. My proposal at its core remained just that, story, with only the elements of its telling being the true ‘creative’ centrepieces.

Science fiction is a great love of mine and the genre I am most interested in. Seven Stories does feature exhibits inspired by the ‘Aliens Love Underpants’ series, and although the franchise is a clear hit and very popular with audiences, I wondered if maybe its potential engagement with the genre could be taken further. Although ‘Aliens Love Underpants’ does technically fall under the scope of the genre, it is more inspired by particular common elements of it than an exploration of its possible principles. This served as inspiration for me to create a more speculative, ‘realistic,’ educational piece that was still enjoyable and not too grim, harsh or gritty for younger audiences.

Automation was the perfect avenue for this, being a phenomenon that will affect the younger generations even harder than it has the older, while also allowing for interesting means of expression. Rather than having the traditional storyteller telling everyone to sit comfortably, it made far more sense to use screens and synthesized voices. These resources are readily available with just a quick Google search, with plenty of software fitting the bill and several companies that rent projectors or screens for reasonable prices.

The rise of technology in the home was another natural boon that fit in nicely with the proposal. Rather than alienating or confusing children, the use of screens and robotic voices will come naturally to most of them. Indeed, simply walking around Seven Stories revealed a number of children with access to phones, tablets, or other similar devices. It seemed clear then that a story about automation would be relevant enough for them to understand while also being educational and pushing them to understand and work with technology, rather than being left behind and unable to function in the modern world.

For all that, I am a little guilty to admit I did not use many of the lessons or resources provided by the guest speakers as part of the module. Although Dan Smith and his Yossarian site offered an interesting take on the traditional search engine, it felt somewhat gimmicky and has so far – over a few little play sessions – been used as little more than a thesaurus when looking for synonyms or potential metaphors. Its use as an actual search engine capable of quickly and easily gathering resources as part of research has been limited.

Similarly, the ice cube… performance? Meditation technique? was a little too far out there to be of much practical use to me in creating and developing my ideas and approaches to research or other areas. I can relate somewhat to the idea of different sensory reactions to the same situation, reflecting on the passage of time and so forth, but I feel the true meaning of what she was trying to get across was perhaps lost in translation.

Serena Korda’s work in picking up natural interplanetary signals was by far the most interesting on a personal level, but again, on a practical level, maybe a little too niche. There are certainly some applications to be had in using the sounds and data (especially the wavelengths and frequencies, I’d suspect) in music, or perhaps as fun details to include as part of a bigger fictional setting. Of course, the scientific data itself could make for important material in ensuring realism and accuracy in any sort of sci-fi portrayal – this I won’t contest – but the method of obtaining it, though again interesting and fun to engage with, is somewhat impractical for the level of payoff.

DMS8013 Log#4; Stress

| May 29th, 2017

The third and final artifact created as part of DMS8013, “Stress” is a 3D model created using the 3D modelling software Blender, depicting an angry man punching a book out of sheer frustration at studying. It links in with the themes of the other artifacts, which cover similar topics and overall try to represent life from the perspective of students who struggle.

Stress marked the second time I have properly used Blender for anything, and the results form a marked improvement over the first attempt. To use Blender, one generally has to select from a series of basic, pre-defined meshes modeled after basic geometric shapes: cubes, spheres, and so on. In my first foray into Blender, I exclusively used these pre-defined shapes to create an extremely blocky effigy using only the program’s Object Mode. Object Mode only allows for the creation, placement, and some very basic manipulation of these shapes, such as scale and orientation. It is, in essence, a macro view that allows the user to organize their scene on a larger but unfocused level. For Stress, I intended to go further, and delved into Edit Mode, which allows for much more precise and refined manipulation of shapes and objects on a micro level.

The first use of Blender, 3D printed. Crude effigy.

Edit Mode is key to creating any sort of actual, detailed render. The ability to so thoroughly shape and distort objects allows for far smoother and far more detailed creations. In the end, both projects depicted a very similar thing – a person. The difference is in the detail and proportion, however, with the latter being far more humanoid in appearance and far better proportioned.

The second attempt at Blender. Actual limbs and proportion.

Like its predecessor, Stress was made using a variety of pre-made geometric shapes, primarily spheres and cylinders, that were then stretched, thinned, and deformed into more limb-like shapes. Manipulating vertex placement allowed for the illusion of joints such as elbows and knees, while depressions in the face allow for some very minor facial detail such as a mouth, nose, and eyes.

Stress separated into all its component parts. By zooming in, it is also possible to see some of the facial detail.

There were a few do-overs required in certain parts, however, especially the arms. Getting the scale and proportion of these limbs in particular was difficult, and there even came a moment when I opted to delete an entire limb and all the work done on it in order to start again. As it turns out, each object, particularly a sphere, has ‘central vertices’ that can make a great difference to the ease with which certain manipulations are possible. On the first try, the orientation of these vertices – facing vertically instead of horizontally – made stretching the limb into an actual arm incredibly awkward. Similarly, in order to flatten some sections out, rings of vertices needed to be painstakingly flattened into each other one by one, otherwise the object would appear too spherical.
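The ring-flattening step above can be illustrated with a small sketch. This is not actual Blender API code, just the underlying geometry: a ring of (x, y, z) vertices is projected onto the plane of its average height, which is what flattening the ring by hand achieves one vertex at a time.

```python
# Illustrative sketch (not Blender's bpy API): flatten a "ring" of
# vertices to a shared height so the surface stops looking spherical.

def flatten_ring(vertices):
    """Project each (x, y, z) vertex onto the ring's average-z plane."""
    avg_z = sum(v[2] for v in vertices) / len(vertices)
    return [(x, y, avg_z) for x, y, _ in vertices]

ring = [(1.0, 0.0, 0.9), (0.0, 1.0, 1.1), (-1.0, 0.0, 1.0), (0.0, -1.0, 1.0)]
print(flatten_ring(ring))  # every vertex now shares z = 1.0
```

In Blender itself the same effect is usually achieved by selecting the ring in Edit Mode and scaling it to zero along one axis, which collapses all the selected vertices onto a single plane in one operation.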

Less a problem with Blender and more one with the accompanying technology was the matter of rendering and actual printing. In theory, an object with a high polygon count and lots of geometry comes out smoother and is far nicer to look at. However, such renders are also dangerously heavy on the processing power of whatever machine has to render them. More than once I came close to crashing Blender, or even my entire PC, by experimenting with too much geometry. This issue extends beyond Blender; Cura, the software used to create files readable by our 3D printers, might have been able to handle the final, smooth object, but the printer itself could not, at least not at the scales I was aiming for.

The heavily deformed sludge monster that resulted from printing at too small a scale.

Scale was a consistent issue. The 3D printer did not appreciate legs and feet very much, and trying to keep them in the final model resulted in half-melted stumps that the printer couldn’t actually build on without also creating a network of scaffolding. Likewise, with too small a scale came a loss of detail, smoothness, and overall shape, as there was not enough space for the printer to properly – and neatly – build.

3D printing aside, however, the digital model was at least a success by itself, and with some tweaking of scale and perhaps a higher quality printer, a much more accurate model would be entirely possible.

DMS8013 Log#3; Scared

| May 29th, 2017

The second and most time-consuming of the artifacts produced as part of DMS8013, “Scared” is a short animation created using Pivot Animator (PA). Even if the software itself is technically fairly simple (at least in comparison to actual rendering engines), the very nature of animation is highly time-consuming, and Scared almost certainly took the longest to make.

It tells the short story of a student trying to prepare for exams, only to suffer an anxiety attack. They reflect on their position, their skills, and what they can possibly do to try and remedy the situation.

As mentioned, Pivot Animator is a simple and very user friendly piece of software. Characters are animated using “pivots,” essentially the joints of generally humanoid figures, though other types of characters can be made using a built-in figure builder. Fluid animation using the software is therefore, in some ways, similar to stop-motion, with minute changes made to limb, joint, and body positions, before a ‘picture’ is taken of the frame and added to the collective animation timeline.

The biggest drawback of this software is that it only really supports interaction with its own figures. Backgrounds and other effects, if any, need to be created using other software and imported in.

Cheap and dirty, an incredibly simple room/background made using Paint as part of the animation.

Since the story I had in mind required the character to interact, if only minimally, with their environment, such a background needed to be made. Although that particular work is not likely to win any awards on its own, it does fit quite nicely with the low-quality, basic graphics of PA itself, and overall performs its function of providing a ‘physical’ space for the character to interact with. An added benefit is the meta implication: the entire piece is meant to showcase a low-skilled and struggling character, and the appearance of both the character and their environment reflects that.

I decry PA as a simple and low-power piece of software, but even the most basic animation requires a degree of specialized knowledge to really flourish and look right. Specifically, I am referring to anatomy, and the anatomy of movement. Creating a convincing and good-looking walk cycle is not as simple as moving the entire leg back and forth; it requires specific movement of the entire body. This was one such challenge encountered during the animation’s creation, with a good eighty frames ending up deleted due to a poor and wonky-looking walk cycle. Its replacement is still far from perfect, with irregularities in speed and stride fairly evident as the character moves from one side of the room to the other.

An additional issue with PA is the lack of text creation. The story in question called for narration as the character laments their situation and themselves. Text cannot be added in PA directly, and although there are a number of solutions to this, it did add considerably to the time needed to finish the animation, as well as to the overall file size. The easiest and most fluid for my purposes was to simply add the text as part of the background, creating a different file for the addition of every letter to produce a scrolling effect, which could then be played in reverse once the ‘thought’ was over and new text needed to be added.
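The one-background-per-letter idea above can be sketched as a tiny script. This is only an illustration of the sequencing logic (the actual backgrounds were made by hand in Paint); the example line of narration is a placeholder, not from the film.

```python
# Sketch of the scrolling-text scheme: one partial string per background,
# shown letter by letter, then played in reverse to "scroll" back out.

def text_frames(line):
    """Return the sequence of partial strings, one per background image."""
    forward = [line[:i] for i in range(1, len(line) + 1)]
    return forward + forward[::-1][1:]  # reverse pass reuses the same frames

for text in text_frames("ab"):
    print(text)  # prints "a", "ab", "a" - each would be one background file
```

This also makes clear why the background count balloons: a line of n letters needs 2n − 1 backgrounds on its own, before any other frames are drawn.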

An example of one of the backgrounds with text. A new background was needed for every addition of a letter to allow a scrolling effect.

In total, Scared comprises five hundred and thirty-six frames, plus one hundred and two backgrounds, to account for just a few lines of dialogue.


DMS8013 Log#2; Study Buddy

| May 28th, 2017

The first and perhaps most difficult of the three artifacts asked for as part of DMS8013 was the ‘Study Buddy’: an Arduino device that is fairly simple on the outside, but the interior of which took some wrangling to get working.

Study Buddy, ver.1

The basic idea behind the Study Buddy was a small device that could be kept on a desk or bedside and would provide uplifting and perhaps even helpful phrases to those struggling and stressing over exams, assessments, and general university life. This would be achieved with a simple interface and an LCD screen linked to a pair of breadboards and an Arduino.

The first issue to overcome was the flickering encountered on the LCD screen. As it turns out, rather than having a set contrast, the LCD dedicates one of its pins to controlling it. This pin, if simply provided with nothing but voltage, creates feedback that causes a distorted, constantly flickering screen. Usually this is controlled with a potentiometer; however, I simply wanted the contrast set to maximum at all times, as there was no need in the design for it to be adjustable. The problem was soon overcome through the use of a voltage divider, built from 10kΩ and 220Ω resistors. This not only ensured the voltage to the display was constant, but also stopped the feedback and eliminated any flicker.
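As a back-of-envelope check of the divider maths (assuming the usual arrangement for an LCD contrast pin: a 5 V supply, the 10 kΩ resistor on top, and the 220 Ω resistor from the contrast pin to ground):

```python
# Classic voltage divider: V_out = V_in * R_bottom / (R_top + R_bottom).
# Values assumed: 5 V supply, 10 kOhm top resistor, 220 Ohm bottom resistor.

def divider_out(v_in, r_top, r_bottom):
    """Voltage at the junction between the two resistors."""
    return v_in * r_bottom / (r_top + r_bottom)

v_contrast = divider_out(5.0, 10_000, 220)
print(round(v_contrast, 2))  # prints 0.11
```

Holding the contrast pin near 0.1 V, i.e. close to ground, is what gives a fixed near-maximum contrast on typical HD44780-style character LCDs.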

Next, an interface was needed. In the original design I opted to use a potentiometer that would allow users to “point” to a green, yellow, or red LED to signify whether things were going well, fairly, or badly, respectively. By setting up an analog read on the potentiometer, and then mapping the result to one of three integers, it was possible to selectively make the LCD display different messages depending on which LED was being pointed at. The LED itself would, naturally, light up when pointed at to signify which choice had been made. The next step was to create a bank of responses to be chosen at random, so as not to create a stale experience that would quickly lose its effectiveness. However, aspects of the interface design bothered me. It was difficult to tell exactly where the potentiometer was pointing, there was no default or ‘resting’ position, and overall it did not feel very intuitive to use. Testers would look at it and dial the potentiometer too much, causing a smattering of messages that lost their effect when so jumbled up.
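The read-and-map step described above can be sketched in Python. The real sketch would use Arduino's analogRead() and map(); this is just the same integer re-mapping logic, with the three-zone split as an assumption about how the dial was divided.

```python
# Sketch of the version 1 logic: a 0-1023 analog reading (the Arduino
# ADC's range) mapped to one of three zones, one per LED.

def map_range(x, in_min, in_max, out_min, out_max):
    """Integer re-mapping, equivalent to Arduino's map() function."""
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

def select_mood(reading):
    zone = map_range(reading, 0, 1023, 0, 2)  # 0 = green, 1 = yellow, 2 = red
    return ("good", "fair", "bad")[zone]

print(select_mood(100), select_mood(512), select_mood(1000))
```

One quirk worth noting: with this mapping only readings at the very top of the range land in the last zone, which may be part of why the dial felt imprecise to testers.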

Study Buddy, ver.2

Aside from changing the orientation of the LCD in relation to the rest of the device, version 2 also uses a series of three buttons instead of a single potentiometer. These buttons do not require an analog read in the code, but did instead need a larger array of integers to track whether they had been pushed or not, and to cycle through responses. These additional integers (which digital pin on the Arduino each button is connected to, the current switch state, the previous switch state, and the reply) form the bulk of the difference in code between versions 1 and 2 of the Study Buddy. Happy enough with the design, it was time to create an actual bank of responses.
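The current/previous switch-state pairing above is the standard way to detect a button press as an edge rather than a level, so a message fires once per push instead of repeating while the button is held. A minimal sketch of that logic:

```python
# Sketch of the state tracking described above: a response fires only on
# the transition from unpressed (0) to pressed (1), not while held down.

class Button:
    def __init__(self):
        self.prev_state = 0  # the "previous switch state" integer

    def pressed(self, current_state):
        """True only on the rising edge of the button signal."""
        fired = current_state == 1 and self.prev_state == 0
        self.prev_state = current_state
        return fired

b = Button()
print([b.pressed(s) for s in (0, 1, 1, 0, 1)])
# prints [False, True, False, False, True] - held presses fire only once
```
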

The bank required use of the switch/case function, which allows different code to be executed depending on pre-set conditions – in this case, which of the three buttons is pressed. To actually randomise the result, the random() function selects a number, which is inserted into the reply integer, and a response is picked based on that reply. Each button ended up with three different responses, and though each one was longer than the 32 characters the LCD screen can display at any one time, through the use of delay functions and resetting the screen it was possible to have a sort of scrolling message. After the message was done, the LCD would revert to its default screen, asking how the user is doing and presenting them with three options that lined up with the buttons and LEDs. Much like the first version, the LED would light up when its respective button was pushed.
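The response-bank logic can be sketched as follows. The switch/case becomes a dictionary lookup here, and the long messages are chopped into 32-character screens (a 16x2 LCD) that the Arduino would show in sequence with delays. The phrases are placeholders, not the device's actual messages.

```python
# Sketch of the response bank: pick a random reply for the pressed button,
# then split it into LCD-sized screens to show one after another.
import random

RESPONSES = {  # placeholder phrases, three per button as in the post
    "bad":  ["Deep breaths. One page at a time, you will get there.",
             "Bad days happen. Rest a little, then try one small task.",
             "You have come further than you think. Keep at it."],
    "fair": ["Steady progress is still progress. Keep going.",
             "Halfway there counts. Take a short break, then push on.",
             "Fair is fine. Tomorrow can be better still."],
    "good": ["Great work! Keep the streak going.",
             "Well done! Reward yourself, you earned it.",
             "Excellent! Momentum like this wins exams."],
}

def pick_response(choice):
    return random.choice(RESPONSES[choice])

def lcd_screens(message, width=32):
    """Split a message into 32-character chunks to 'scroll' through."""
    return [message[i:i + width] for i in range(0, len(message), width)]
```

On the Arduino the equivalent would be a switch over the pressed button, random() to index the bank, and lcd.clear()/lcd.print() with delay() between chunks.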

The responses themselves needed to be a mixture of encouragement and hopeful messages. It felt only proper to make the red and yellow (the ‘bad’ and ‘fair’ choices) the longest and most detailed, since those struggling and stressing the most would have the most use for the Study Buddy. However, praise for a job well done is also very important to encourage continued good work, and it was for this reason that a ‘good’ response was included at all.