Subsquare. I make sounds that sometimes turn into music.

Old work-in-progress images from “Evoid Droid”

8th of August, 2012

Today I accidentally stumbled upon this neat app for OSX that looks through your e-mail for old, forgotten photos. I let it run during lunch and I was surprised to find a whole lot of sketches and stuff for demos I’ve worked on in the past.

In particular, I found a whole lot from the demo “Evoid Droid” which we made for the Xbox 360, as part of a sponsored Xbox 360 demo competition for Assembly 2007 in Helsinki. While that demo is not even close to being the most polished piece I’ve ever worked on, it was a fun prod to do, and the team was awesome.

So, without further ado, here are a few select pics for those who might be interested in that sort of thing. For those who have seen the demo, it should at least show you a little of how the scenes progressed.

 

Don’t use audiowarez

7th of August, 2012

Note: this was actually written in May, but for some reason it was never published, so — here goes:

Yeah, it’s soapbox time. After a nice discussion on DAW preference, I wanted to highlight some points I’ve tried to make in earlier posts on my blog, specifically those centered around using pirated software to make music.

In short: don’t.

The longer explanation as to why you shouldn’t use pirated audio software can be summarized like this:

  • It’s illegal. Ought to be obvious, but a shocking amount of people don’t know/care.
  • You start hoarding. You install “everything” you read about online, which leads to…
  • Your system will turn unstable. Yes it will. Bad cracks, malware (and loads of it) will turn your highly tuned audio-PC-monster into a sluggish 286 after a long night of partying. If you’re super-unlucky, you’ll also be hacked in some way or another. So much for “savings”.
  • You won’t learn anything. This is the most important point!

I’ll repeat the point here — you won’t learn anything by bathing in pirated plugins and softsynths. Why? Because you’ll just skip around, testing one plugin after another and never actually getting to know a plugin, what makes it tick, or even whether it’s a good one to begin with.

Too much in the music production world is, unfortunately, about quick wins or “brands”. You see BT use this and that and think “OMG! That’s all I need to make music like BT!” — of course, this isn’t even close to being true, and everyone knows it, but self-delusion is a powerful force.

This is also why, in these days of Skrillex, “Massive and FM8 = dubstep”. If I see one more “Make that signature Skrillex talking bass in Massive” video on YouTube I’m going to vomit all over myself.

Therefore, instead of hoarding plugins and installing a gazillion softsynths, I recommend this alternative approach — it’s not littered with InstaMusic(tm) tips, but then again, that’s just the way it is:

  • Buy a legal copy of your favourite DAW and install it fresh. I like Reaper.
  • Check the bundled plugins, and IF you miss something — install just one of each “basic feature”-plugin. Yes, that means one compressor, one reverb, one delay, one EQ etc. This is to learn. You can expand later, but keep the count low.
  • Force yourself to use only those plugins. Learn all about them. Read the documentation!
  • Learn the built-in features of your DAW. They are better than you think.

The upside of this approach is that you’ll know your tools, which means that you’ll know what to do and when to do it! Later on, once you’ve covered the basics and want to upgrade, you’ll know exactly which plugins and methods to reach for.

End of rant. :)

    An interview

    23rd of February, 2012

    I was interviewed this week in the Norwegian edition of Computerworld Magazine about my involvement in the demoscene, and the Norwegian demoscene history exhibit “Pixlar” currently on display in Oslo. The interview is more focused on what employers should be looking for when recruiting.

    After I posted it on Facebook and Twitter, I’ve been repeatedly asked if there is an English translation somewhere — unfortunately there wasn’t, so I went ahead and translated it myself. I also took the liberty of adding back a few details that were lost between the original interview and the final text as published.

    The art of code
    by Kenneth Christensen

    Originally published in Computerworld on the 21st of February, 2012.
    Freely translated to English.

    Nobody challenges and explores technology like demoscene artists.


    Illustration by Fairfax/Torkell Berntsen


    “The art corridor” at Oslo Central station in the capital of Norway has since January had a very special exhibit on display. The exhibit, named Pikslar (translates to “Pixels”) showcases examples of digital art and culture from the last 20 years, including retro computers like the Commodore Amiga 500.

    The exhibit shows passers-by a glimpse into a world unknown to most. The demoscene is a subculture of people with a higher than average interest in finding creative uses for computer technology, who utilize their exceptional technical skills to push the limits of what’s possible.

    The real hackers

    Demosceners are driven by the desire to make the computer do amazing things, says Bent Stamnes.

    – Take the classic example of the Commodore 64. Turn it on, and it does nothing. It just sits there, waiting for you to tell it what to do.

    The blank screen and the blinking cursor were an irresistible invitation to a world of technical wizardry and skill for those receptive to the charm of the little breadbox. Stamnes is part of the team behind the Pikslar exhibit, and has been an active demoscener since he was 11 years old, making music by stitching together beep.exe commands in .BAT-files under DOS. Today Stamnes works as a manager for a software development company in the telecom industry.

    – I became a part of the demoscene in 1989, just as the Norwegian demoscene was in its infancy. Now I’m 34 and I’m still active.

    The first group he was in was called MAD (Microchips After Dark, a nod to the Danish 80s rock-band D.A.D – Disneyland After Dark), but these days he admits to coding way less and taking on the role of producer of demos, in addition to making music for them.

    Self-taught enthusiasts

    Most of the people who take an active part in the demoscene have taught themselves the skills needed. The type of technical interest typically found in demosceners is something in the blood, says Stamnes. – You either have it or you don’t. In a professional setting, he often looks for people who live and breathe computer code.

    – I am an employer now, so knowing what to look for in people is really handy. It’s pretty easy to see whether potential new employees are the kind that just wants a job, any job, or whether they are born to code. That’s the difference between an enthusiast and a consultant.

    Demosceners often have a completely different perspective on problem solving than those in the IT-business just there to punch the clock and go home.

    – Demosceners are often exceptionally good at coming up with solutions in a macro perspective, something that’s both rare and very important at times. These people have faced — and solved — completely unique problems. It is often a strong advantage to have such developers on your team, someone who dares to ask the right (and often unpopular) questions.

    Huge advantages

    If you’re looking for a job in creative businesses, such as advertising, film or game development, it’s often a strong advantage to have something to show, in addition to your regular resumé.

    – There’s basically no creative business recruiter who doesn’t know what the demoscene is or what talent comes out of it.

    To have a demo in your portfolio not only shows that you have the technical know-how, but also that you find creative ways to apply it through a passion for technology.

    – A genuine hacker has this glow, this natural instinct about technology. For them it’s almost unbearable not knowing how something works — and at the same time, they can’t resist optimizing it and making it better.

    The social aspects

    The demoscene is very social, and the cliché of the bespectacled nerd in his parents’ basement is completely incorrect. A common interest in technology, graphics and music, combined with the desire to meet, share and acquire new knowledge, is far more descriptive of the culture.

    – Being social is very important. There are demoscene events almost every week somewhere in the world, often having people fly in from other countries just to attend for a few days. We talk, code, make demos and enjoy ourselves.

    Having likeminded people around you to share your demos with is the whole point of the subculture. Getting immediate (and often brutally honest) feedback is almost addictive. This group of people can easily separate the good from the bad in a heartbeat.

    – It’s a very knowledgeable and closely knit community. You learn a lot just by being around such creative people.

    – It’s very enriching. The contacts and relationships forged in the demoscene last your entire life. If you’ve competed head-to-head with your hardest competitor (who is often one of your best friends as well), you never forget it.

    A lot of demosceners naturally end up in creative businesses, often at larger companies such as game developers or advertising agencies.

    A top-tip for recruiters: look for people with demoscene experience.


    State of the demoscene: 1991 – 2011

    18th of January, 2012

    Update (23.01.2012): I’ve added stats on demo parties and my thoughts on the correlation between parties and releases.

    I’m quite actively involved in outreach efforts related to the demoscene. I speak at conferences and to companies and media outlets about the demoscene, its history, technical significance and what an amazing pool of talent it is.

    During almost all of my encounters with people not familiar with the scene I get asked “how big is the community?”, or “how many demos are there?”. Those things are quite easy to answer if you look at the releases at Scene.org or the statistics at Pouet.net, but the question asked by active sceners is different: “how long will it last”?

    There are clear changes in the way we consume content these days, and while it might have been completely reasonable and logical to download ZIP-files with executable data and run them on your Amiga 15 years ago, that is no longer the case. It is no secret that I too use YouTube to check out new demos if I don’t happen to be in front of my home (Windows) computer at the time. I use a Mac at work and a Mac laptop as my main machine at home as well. It is simply a matter of convenience.

    To get to the bottom of the actual state of the demoscene I had to look at the raw data. The data in question comes from Pouet.net, the most active demoscene portal and production database we have. I queried data back from the birth of the demoscene (1978) and up to and including 2011. In most cases however, I will focus on the last 20 years – from 1991 to 2011. 
     

    Let’s look at the numbers

    The first thing I did was naturally to look at the number of productions released over the period. I only chose the most popular platforms for this query, because after looking at the less popular platforms, the numbers were marginal and would have made little impact on the overall chart.

    This first chart, which lists all demos, 4k and 64k intros, is the most telling of all, so it’s best to just put it on front-street.

    Demoscene production totals

    [Chart: total demoscene productions (demos, 64k and 4k intros) per year]

    As you can see, the total number of prods (as defined above) has gone down from a peak of 2681 in 1991, through a slight second revival (with the rising popularity of DOS demos) of 2155 prods in 1996, down to just 768 last year.

    It would of course be unfair to compare the current scene with the very vibrant Amiga and C64-scene in 1991, so let’s pick the stable plateau from 2001 to 2006 as a baseline. By doing that we’re looking at a reduction of almost 41% in the span of just 5 years.

    Ouch! However: it has indeed happened before, in the first era of the scene, when the Commodore 64 lost most of its popularity in the mid-90s. I would also point out that there is indeed a leveling-effect, perhaps caused by the scene finding its core audience and authorship for the time being.

     

    Categories

    I wanted to dive into the numbers a little more and segmented the prods into different categories to see how the second most popular statement — “The 64k intro is dead” — stands up to facts. Again I chose only the most popular categories to avoid unnecessary noise:

    [Chart: productions per category per year]

     

    Note: “demo” in the chart above also includes “invtros”/invitations, which are also regular demos but created with a specific purpose: to invite people to demo parties.

    As you can see, “demo” is clearly still the most popular category, with 4k intros in second place (taking over for the 64k intro category as the runner up in 2004). With only 31 64k intros released last year, it does indeed look like that particular category is close to extinction. The 64k intros had a peak of 231 releases in 1997.

    Looking at the younger sibling, the 4k intros have sustained their popularity pretty well over the entire period, and are almost three times as popular as their big brother. But it doesn’t look good for the 4k category either: it peaked in 1999 with 161 released prods, and only had 83 last year.

    However, the most alarming part of this chart is of course that there has been almost no positive growth in any of the most popular categories since 2006 (the only exception is the tiny peak of 4k intros in 2008, but that can be attributed to the NVScene demoparty in the US which encouraged the production of more 4k intros).

     

    Platforms

    Note: for clarifications on the different platforms mentioned, see the bottom of this post.

    After looking at the demos and categories I wanted to dive deeper into the platform divide to see how the releases were spread out across the different platforms. Again I chose only the most active platforms, but this time I looked all the way back to the beginning:

    [Chart: productions per platform, from the beginning of the scene]

    Note: “web” in the chart above consists of both Flash and JS/WebGL/HTML5 demos.

    Now this chart is quite interesting because it not only cements the Commodore 64 as the undeniably most popular demoscene platform of all time, but also in that it documents a few historical (and highly debated) things, most notably the different platform handovers.

    The first, between the C64 and the Amiga, occurred in 1991. The second, between the Amiga and the PC (DOS), occurred in 1995, and the last major platform handover occurred in 2000 — when all three previous platform kings had to pass the flame to Windows.

    The Atari platform never rose to the heights of any of the platforms above, and its popularity peaked, like the Amiga, in 1992. There were 460 Atari prods that year.

    Windows has sustained a higher popularity over a longer time than any other platform, but it’s interesting to see that it never managed to peak quite as high as any of the three former champions. Also, as we already know, there was no new platform to take its place after the oddly similar 5-year periods between each previous peak.

    In terms of the current landscape, I took a closer look at the last 10 years and removed Windows as an option, to see what’s happening in the only segment with upward-pointing activity.

    [Chart: productions per platform over the last 10 years, excluding Windows]

    Note: the slightly odd jump in activity on the DOS platform in 2006 is an anomaly caused by a 64b DOS intro competition, whose 28 prods boosted the overall platform activity quite a bit.

    We can extract quite a bit of information from this last graph, among other things:

    • The Commodore 64 has doubled its popularity in the last two years. The C64 is also the only platform to actively sustain its popularity over a long, long time.
    • The second success story is web demos (JavaScript, Flash, WebGL and others), which have also doubled in popularity in just one year.
    • Atari refuses to die. It actually bested its rival the Amiga in 2004, with 146 prods released, and has remained consistently more popular since (with the exception of a small dip in 2009).

    The Amiga and DOS have also managed a slight upturn in the last year, while the rest (Linux and Mac) are pointing solidly into the ground, with only 30 and 15 prods released last year, respectively.


    Groups

    I wanted to dig a little deeper into this very sharp decline in overall demos released, particularly to see if I could spot some corroborating trends here. I did this by looking at the number of unique groups (as represented on Pouet.net) attached to the released prods over the years, and the trend is pretty much the same:

    [Chart: unique active groups per year]

    The fact that fewer groups are releasing demos makes perfect sense when looking at the overall decline in activity on all of the three major categories – demos, 64k and 4k intros. If there had been a big discrepancy between the overall release numbers and the number of active groups, a logical conclusion would be that a few select groups were pushing out more prods and boosting the overalls. That, however, does not appear to be the case here.

    What we can read from this is that even though there is a decline in the number of groups active in the demoscene community, the overall decline in total productions released is steeper, meaning that each group is releasing fewer prods per year.


    The community

    I also wanted to take a look at the active community around the demoscene, and the best way to actually measure this was to look at unique user activity on Pouet.net:

    [Chart: unique active users on Pouet.net per year]

    We can see from this chart that the activity level on Pouet hit its plateau in 2007/2008 and remained more or less unchanged until last year, when a decline of 18% is observable. It is tempting to link this to the rise in popularity of the Commodore 64 and that community’s lessened interest in active use of Pouet.net, but that is entirely speculation on my part.


    New: Demo parties

    After the original article was posted, I was told that I had forgotten to take the social aspects of demomaking into account — very true — so it was time to rectify that.

    A separate thing from the online community is the demo party — an actual event that takes place in a location that has power, a PA-system and a projector. Participants (often from a lot of different demo groups) create their productions and enter them into a competition. The audience usually decides the winners by public voting. Demoparty.net holds a database of most already held and upcoming demoparties, should you wish to visit one (you should!).

    First up is this chart over demo parties held per year, from the very beginning:

    [Chart: demo parties held per year]

    Note: there is a severe lack of data on the parties in the very beginning, so the first part of this graph is not entirely correct. That said, it should not make much of an overall difference.

    There is a long period from 1996 to 2004, almost a decade, where the number of demo parties remained more or less constant. It peaked in 1999 with 112 events, and hit an “all-time” low (in new school times) in 2010 with 63 events. The interesting bit comes right at the end, where there is a bump up to 77 parties in 2011.

    These days most demos are released at some sort of demo party, so I wanted to see if there was any correlation between the amount of parties and the amount of released demos. This chart shows the same graph of demo parties, only magnified by 10 so as to bring it into scale with the other metric: the amount of released demoscene productions in the same period:

    [Chart: demo parties per year (scaled ×10) plotted against released productions]

    Now that’s interesting indeed! We can clearly see that while in the original infancy of the demoscene most productions were released outside of demo parties, the two lines finally establish a symbiotic relationship in 1999/2000, with each dataset following the other very closely. In fact, last year marked the first time in history that the number of demo parties and the number of released demoscene productions matched up.

    But what does this mean? Well, there are a few ways to interpret this data, and here is my take on it:

    • The scene is growing more social. Either this is due to the average age of active demosceners going up, or it’s simply a natural response to the consistently “virtual” lives we lead online.
    • Will more parties lead to more demos? Maybe; it would at least make sense to follow these two datasets in the future. One caveat is that parties tend to follow releases, so there is perhaps also talk of a critical mass establishing itself.
    • The scary thought: what if 2011 was the tipping-point? Where we had more parties yet less releases? Is this the beginning of the end?

    ..and with those two things, we arrive at the end of this post, and the age-old statement…

     

    The scene is dead

    So, is it? No. But it is changing dramatically — and at the same time, not at all. The one huge surprise for me while working with this data was the persistence of the C64 as a demo platform. It is simply staggering that a machine that turns 30 years old this year is still such a favourite among demoscene enthusiasts. Perhaps it’s not surprising, considering its extreme popularity back in the early 80s and the popular culture adoption of retro and 8-bit computing.

    Unlike a Fox News “journalist” I prefer not to dictate what this data actually means, and will instead offer my personal opinions and thoughts on what you’ve seen above:

    • It’s clear to me that the demoscene needs to strengthen its online presence to stay visible and relevant
    • The C64 is “the little breadbox that could” – a clear fan favourite, and will remain so for decades to come. Update: Markku “Marq” Reunanen pointed out that a lot of C64 prods were indeed added to the Pouet.net database from the CSDb database, which accounts for some of the overrepresentation of that platform. However, it still reflects the general popularity of the Commodore 64.
    • The trends we are seeing are not unique. If we compare to the indie game developer community, that too faced a sharp decline in activity and releases until the digital distribution system Steam started adding indie games to their catalogue as well as encouraging the indie game community to start adding their games through their Steamworks initiative. One could perhaps argue that the demoscene could use a distribution platform of its own?
    • The scene is splitting into two: one part that sticks with the retro machines, Pouet.net and other sites for old school enthusiasts, and one part that will embark on new technologies like WebGL/Processing/VVVV and other, more presentable platforms. This second part will stay a minority for a while before it totally outnumbers the other platforms… or dies trying.

    For those of us who love the demoscene and would like to see it thrive again, that leaves one single question: “What can we do?” — the answer is as simple as it is complex: “Make more demos!”. If you want to get started with demos, or need a kickstart to get back in the game, let me know and I’ll try my very best to point you in the right direction.

    PS: I’d like to thank Gargaj for contributing to this post with his SQL query mastery and not hitting me over the head with a blunt instrument every time I requested changes to the queries or new ones to be made. Thanks man.

    Additional information 

    I’ll end this post with a few clarifications on why the different platforms became popular (or not) in the first place, for readers who might not be entirely familiar with them:

    • C64 – raw unified hardware – every machine was identical, leaving only one way/method to create demos. Simplicity in its design made it an ideal platform for competition because it highlighted the talents of the programmers.
    • Amiga/Atari – mostly unified hardware – a few established routes to create demos, again highlighting the talents of the programmers, since their skills were the differentiating factors. The Amiga was more popular than the Atari mostly due to better hardware and features in the Amiga, especially in the OCS/ECS-age.
    • PC/DOS –  somewhat versatile in the ways you could set it up, not unified hardware (you could have a lot of different sound/graphics hardware), plenty of ways to make demos.
    • Windows – a whole bucketload of configuration options but incredibly tight abstraction layers (at least after a few years), solid OpenGL and D3D-support. Became the household standard OS for a whole generation.
    • Mac – essentially unified hardware, but wild changes between hardware revisions made it hard to code for in the early days. Basically has one method to create demos, but incredibly developer-unfriendly (for demo coders).
    • Web – lots of compatibility problems (“You need to use the nightly build of Chrome and manually set this toggle to get the demo to run”) and no one technology has really been set as the standard yet. However, it is clearly on the rise, driven by industry focus on things like WebGL.


    Quick pitch-trick in Reaper

    16th of January, 2012

    There’s one thing in ACID that I missed in Reaper, but thanks to this little trick I can have it here as well: using the +/- keys on the numpad to pitch the selected piece of audio either up or down a semitone. This is an insanely quick and efficient way to tweak a take without messing about in context menus or “Clip Properties”.

    What we’ll do is to create a macro (or an “Action”, if you will) that binds the +/- keys to a function in Reaper that’s (unfortunately) usually a little buried. The end result will be that hitting either of those keys when you’ve marked a piece of audio will pitch it up or down but preserve the playback rate — meaning, the length will not be affected.

    If you do wish to change the playback rate as well, simply use Increase item rate by ~6% (one semitone) preserving length, clear ‘preserve pitch’ instead of Item properties: Pitch item up one semitone which I’ve used in the example below.

    This trick doesn’t require installation of add-on software, tweaking of system files or anything spooky at all. :) Here we go:

    1) Go to Actions > Show action list

    [Screenshot]

    2) Click New next to Custom actions

    [Screenshot]

    3) Under Filter, enter “pitch semitone” and the list below will show only items which include that text

    [Screenshot]

    4) Drag the item Item properties: Pitch item up one semitone into the right panel and give the action a name — I use “+1 semitone”, then click Ok

    [Screenshot]

    5) Under Shortcuts for selected action click Add…

    [Screenshot]

    6) In the field next to Shortcut, click, then press the + key on your keyboard to record the keystroke. Check that the field now reads NumPad +, and click Ok

    [Screenshot]

    7) Now just go back to step 2 and repeat the process for Item properties: Pitch item down one semitone, attach that to the - key on your keyboard, and you’re done!

    [Screenshot]

    New remix released

    19th of November, 2011

    Update (16.01.2012): the vocal-version of the remix (my favourite) has been uploaded, check below to listen or buy at Beatport.

    I remixed a track by Miu for his EP release “We are the bass” a few months ago, and it’s now released. You can get it from the usual places: Beatport, iTunes and Juno Download.

    In terms of the remix itself, I think it works quite well. My original draft used the vocal tracks of the original, but very late in the process I was told they couldn’t be used, so I removed them.

    It turns out this wasn’t necessary, so it bugs me a bit that it wasn’t released with them, as that’s how the remix was constructed, but whatever — the breakbeat groove works well on its own, I hope.

    Miu feat. Zaiah’Man – We are the bass (Subsquare Breakbeat Remix – Vocal Version) by transistorbass

    FMX 2011

    8th of May, 2011


    Yesterday I got home after three exciting days at FMX 2011 in Stuttgart, Germany. I have attended this conference since 2006, speaking about real-time graphics and the demoscene. FMX (or indeed “the 16th Conference on Animation, Effects, Games and Interactive Media”, which is its full name) is a fantastic conference, and I urge anyone with the means and opportunity to visit it. The whole conference lasts for five days, and is also closely tied to the ITFS (an animated/short-film festival) which goes on at the same time.


    FMX is an interesting place to speak because the conference itself is cross-media (film, animation, technology and education) and divided into several sections: workshops, seminars, talks, exhibitions and a trade-floor with actually interesting exhibitors — not blood-thirsty sales people — which is always nice. Apart from the talks, I especially enjoy the exhibition “Into The Pixel” which is arranged by the Academy of Interactive Arts and Sciences and mirrored at FMX. Take a look at the 2010 selection of fantastic game art.

    This year, the conference had over 3.500 visitors every day, and a vast majority of them were students. I find that speaking to students can sometimes be challenging, but at FMX it is nearly always pleasant, because they really want to be there and are genuinely interested in what you have to say (and show). My session at FMX has always been very much about the visuals, because it is way more engaging for the audience to actually see what demos are, compared to the somewhat theoretical exercise of talking.


    After having a few years to “home in” on the perfect way to arrange my talk, I have landed on the following: a quick 10-minute introduction where I cover a few words about myself (“Why should we listen to you?”), what I represent (“Scene.org, archive, platform, the awards”), and what demos are (“Always hard to explain..”). After that, I quickly move on to showing demos, and I keep the interruptions between each production to the minimum (often just referencing what we’ve just seen, and what is coming up).

    You can see the slides I used this year here (PDF). If you’re interested in what I’m currently working on, there is a little hint in there as well. :) As usual, comments are always appreciated.

    In terms of what I show, I have a bit of creative freedom here, because I have to restrict myself to a maximum of 40 minutes (often a bit less than that) to fit the hour-long slot, and still leave room for technical glitches/questions/my own rants. It is no secret that every year I have personal favourites among the nominees for the Scene.org Awards from which I pick the productions, but I also have to balance it up to make a good show.

    I also have to keep in mind that between the demos I am going to tell the story of the different categories, and especially the size-optimized ones (64k and 4k intros) require a bit of a lead-in.

    This year at FMX, I showed the following (in this order):

    After the session I spent some time talking to students and others who were lingering in the hall after I was done and answered a number of questions about the demoscene, how to get started and such. It is always nice to see people interested in the scene, especially at such a production-oriented conference as FMX, where people usually have set their eyes on a career in film or games.

    I will most likely return to FMX in 2012 to talk more about the demoscene and show more cool demos. Until then, feel free to follow me on Twitter or just e-mail me.

      Behind the spheres on the plane

      29th of April, 2011

      [Screenshot from the demo]

      Before you start shouting Enough is enough! I have had it with these motherf&%$ng spheres on this motherf%$#ng plane! – I told you all I’d write this. :) As promised (what?!) I have gone through the demo with the coder, Mr Sverre Lunøe-Nielsen (also known as Hyde) and written a somewhat lengthy piece focused around our latest demo, “Spheres on a plane” (watch a video of it or download the executable), and the technical aspects that went into making it. I should state early on that if you’re not interested in the demo (or any demo) or the way they’re made, this entry might not be for you.

      Background

      Over the years, a big pile of links pointing to Vimeo.com and similar sites had been building up in the Skype logs between me and Sverre. We quickly realized two things: 1) that we like, more or less, the same sort of visual expressions in demos, and 2) that we never seemed to actually do anything remotely similar even though we wanted to.

      Therefore, it was almost with a sigh of relief that we canned our megalomaniacal dreams about making a compo killer demo for this year’s The Gathering, and instead concentrated on doing a demo consisting of a few simple, yet beautiful (at least we think so), scenes.

      This was around mid March. The picture that stayed on as the main reference was the following:

      [The reference image]

      In what follows, Sverre will take over the keyboard (you know, like in oldschool scroller-style) and talk a bit about some of the steps we took, on the technology side of things, when making this demo – enjoy!

      Ambient occlusion

      In order to capture the visual richness of the reference image, the very first thing to get control of is the auto occlusion of the collection of pyramids that make up the central object. For every point on the object, this information is then used to give it the correct shade of darkness. In computer graphics terms, this auto occlusion information is referred to as “ambient occlusion”, as it measures the amount of visibility of the environment from any given point of view on the object.

      In theory, to calculate ambient occlusion correctly for a given point on a given object, you can do the following: take a million rays, all starting at the given point and pointing in every imaginable direction in space. Then, for each ray, find out if there’s any intersection between the ray and the object. The ratio of the number of rays that intersect to a million is then the occlusion factor. For a point inside a sphere, this occlusion factor should be close to 1. For a point in the interior of a face of a cube, it should be close to 1/2, and for a point far far away from the occluding object, it should be close to 0.

      This approach would amount to something of a Monte Carlo integration method for determining the occlusion factor. Since we do not want to raycast a million rays in realtime, it’s better (in this case at least) to know what kind of integral we are trying to integrate and solve it in a different way. Of course, what the procedure above is doing for you is that it gives an approximation to the surface area covered by the occluding object after radially projecting it onto a unit sphere centered at the point of view (i.e. the origin of those million rays).
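      Purely for illustration (this is not the demo’s code, and all names here are made up), the brute-force Monte Carlo estimator described above could look something like this in C++ — the ray/occluder intersection test is left as a callable you would supply yourself:

          #include <cmath>
          #include <cstdlib>

          struct Vec3 { float x, y, z; };

          // Uniform random direction, via rejection sampling inside the unit sphere.
          static Vec3 randomDirection()
          {
              for (;;) {
                  Vec3 v = { 2.0f * rand() / RAND_MAX - 1.0f,
                             2.0f * rand() / RAND_MAX - 1.0f,
                             2.0f * rand() / RAND_MAX - 1.0f };
                  float len2 = v.x * v.x + v.y * v.y + v.z * v.z;
                  if (len2 > 1e-6f && len2 <= 1.0f) {
                      float inv = 1.0f / std::sqrt(len2);
                      return { v.x * inv, v.y * inv, v.z * inv };
                  }
              }
          }

          // Intersect is any callable taking (origin, direction) and returning true on a hit.
          template <typename Intersect>
          float occlusionFactor(const Vec3& p, Intersect rayHitsOccluder, int numRays = 100000)
          {
              int hits = 0;
              for (int i = 0; i < numRays; ++i)
                  if (rayHitsOccluder(p, randomDirection()))
                      ++hits;
              // ~1 deep inside an object, ~1/2 on the interior of a cube face, ~0 far away.
              return float(hits) / float(numRays);
          }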

      Here’s what we did: Consider the reference image. The interesting object is built up of copies of the same building block, namely a pyramid. Take one such pyramid occluder, fix an orientation and position it so that its center of mass is at the origin. For a given point, p, outside the pyramid, radially project all the triangular faces of the pyramid facing towards the point p onto the unit sphere centered at p. Calculate the area of the projected point set. The area is then the occlusion factor for p with respect to the occluder pyramid. Notice that this works since the pyramid is a convex polyhedron.

      For a non-convex triangular mesh, two forward-facing triangular faces might have overlapping projections, and the correct occlusion factor with respect to these two triangles would be the sum of their projected areas minus the area of their intersection. Luckily, we can disregard this difficulty.

      We will not be getting into the formulas for calculating the area of the projected triangles. Suffice it to say, it is an area integral whose domain is the union of the triangle faces visible from the given point of view. Originally, I was hoping that the integral had a nice exact and closed solution. But after having wolframalpha.com chew on it and fail a couple of times, I decided it was time to invoke some good old brute force precalculation.

      So, back to the pyramid. We chose a bounding volume containing it and proceeded to calculate the occlusion factor with respect to the pyramid for all points on a regular grid inside the volume. The results were put in a volume texture and saved offline.
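      A rough sketch of that offline step (again with assumed names, not the actual tool code): sample the occlusion factor on a regular grid inside the bounding volume and store it as a flat array, which later becomes the volume texture. Here, occlusionAt() stands in for the projected-area computation described above.

          #include <vector>

          struct Vec3   { float x, y, z; };
          struct Bounds { Vec3 min, max; };

          std::vector<float> precalcOcclusionVolume(const Bounds& box, int nx, int ny, int nz,
                                                    float (*occlusionAt)(const Vec3&))
          {
              std::vector<float> volume(nx * ny * nz);
              for (int z = 0; z < nz; ++z)
                  for (int y = 0; y < ny; ++y)
                      for (int x = 0; x < nx; ++x) {
                          Vec3 p = { box.min.x + (box.max.x - box.min.x) * x / float(nx - 1),
                                     box.min.y + (box.max.y - box.min.y) * y / float(ny - 1),
                                     box.min.z + (box.max.z - box.min.z) * z / float(nz - 1) };
                          volume[(z * ny + y) * nx + x] = occlusionAt(p);  // one texel per grid point
                      }
              return volume;  // saved offline, uploaded as a 3D texture at runtime
          }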

      When the effect runs in realtime, we proceed in a way similar to deferred shading: first we draw the color of every copy of our pyramid into a color render target. We then create a light buffer and populate it by simply placing some 20 point lights, randomly distributed, inside the view frustum. These lights cast no shadows, so if we had stopped here and combined the light buffer with the color buffer, the object would appear with a somewhat interesting lighting but without any ambient occlusion.

      Hence, before doing this combining, we do the following: for every pyramid in the object, we render a bounding mesh. For each pixel inside the bounding mesh which is on the object we are shading, we look up the occlusion factor from our precalculated volume texture and decrease the value of the corresponding texel in the lightbuffer accordingly.

      Doing this for one pyramid has the effect of causing that pyramid to “cast occlusion” to all nearby pyramids.  By doing this for all pyramids in the object, we arrive at the shading we are looking for.

      For the sake of self-ridicule, here’s the very first visual outcome of having implemented the above algorithm (in delicious debug colors!):

      [Screenshot: the first, buggy ambient occlusion test, in debug colors]

      Notice the super cool patterns on the floor, close to the pyramid base, due to non-normalization of the precalculated occlusion values. After having massaged the code a bit, the rendering looked like this:

      [Screenshot: occlusion volume with remaining artifacts]

      The remaining artifacts were now down to floating point imprecisions and texture resolution. To fix this, more tweaks were made to the precalculation code and finally, ta-daa:

      [Screenshot: the final ambient occlusion result]

      It is worth noticing one very cool shadow effect in this final image that does not stem from the algorithm described here: along the edges of each pyramid, we (well, Bent that is) added a nice shadow in the color texture of the mesh. It has absolutely nothing to do with the geometry of the mesh, but it is just as effective as any realtime ambient occlusion scheme. :)


      Notice also that the pyramids have no relative motion with respect to each other. This is a bit boring, since it does not show the real power of the ambient occlusion shading. The only place where the ambient occlusion shading is affected by relative motion is on the ground plane which the object hovers above – this looks really good. However, having dynamic shading like this enabled us to put an easter egg in the demo: you can add more pyramids to the object by mouse-clicking on any pyramid face. Run the demo and try for yourself.

      Finally, I have to add that the above description of our algorithm is a bit simplified. For example, I did not say a word about how to keep a pyramid from occluding itself. This, however, belongs to the realm of hacking and any hack works. More seriously, however, the algorithm as portrayed produces dead wrong results in many cases.

      Consider for example what happens if you place two pyramids base down on a plane, side by side, and look at the area around the edge where they intersect. According to the algorithm above, the base faces of both pyramids would “get affected” by ambient occlusion, producing an “ambient occlusion halo” around the edge… Yuck. Luckily, this is easily mendable by including some simple visibility considerations. However, for this demo, time ran out and things looked OK as they were. But take a look at the image above on the lower right edge near the floor to see this pathology in action.

      Rendering path

      As I mentioned briefly already, we are using a deferred rendering path in this demo. The benefits of doing deferred rendering when shading local light phenomena are very comparable to the benefits of having a spatial hash of rigid objects when doing collision detection in a physics engine. The code also gets a lot more practical to work with, and the temptation to throw in a couple of lights here and there often becomes far greater than the urge to keep the framerate above a sober limit. The former is exciting, the latter is not. The glowing spheres in the following two images are a direct result of this:

      [Two screenshots: glowing spheres]

      In addition to standard use of textures, pointlights and the ambient occlusion technique already mentioned, there’s one part of the demo that uses raytracing. The very last scene (before the credits/greetings) displays a red ball and some reflective pyramids, using a specialized pixel shader for rendering reflections between convex objects. In any pixel shader that relies on raytracing, the crucial thing is always to speed up the calculation needed to find the intersection point between rays and the mesh we are shading.

      In our situation, we are (again) lucky enough to be dealing with convex polyhedra, essentially defined by a low number of triangular faces. Any convex polyhedron can be thought of as the intersection of a collection of half-spaces in Euclidean 3-space. For a cube, you would need six half-spaces and for our pyramid, we need five. Specifying a half-space can be done by specifying a plane together with a choice of normal direction. Thus, we can represent a pyramid as five planes with five chosen normal directions. If you think of a standard mesh with faces and normals, the faces give you the planes and the normals give you the normal directions.

      Representing our pyramid by five planes and corresponding normal directions, the problem of finding the intersection between a given ray and the pyramid boils down to doing at most five ray/plane-intersections and some bookkeeping.
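      In code, that bookkeeping could look roughly like this (a sketch with assumed names, not the actual shader): clip the ray against each half-space, keeping the latest “entering” hit and the earliest “leaving” hit; if they cross over, the ray misses the polyhedron.

          #include <algorithm>
          #include <cmath>

          struct Vec3  { float x, y, z; };
          struct Plane { Vec3 n; float d; };  // points p on the plane satisfy dot(n, p) + d = 0, n pointing outwards

          static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

          // Returns true on a hit and writes the entry distance along the ray to tHit.
          bool rayConvexPolyhedron(const Vec3& origin, const Vec3& dir,
                                   const Plane* planes, int numPlanes, float& tHit)
          {
              float tNear = 0.0f, tFar = 1e30f;
              for (int i = 0; i < numPlanes; ++i) {
                  float denom = dot(planes[i].n, dir);
                  float dist  = dot(planes[i].n, origin) + planes[i].d;
                  if (std::fabs(denom) < 1e-8f) {        // ray parallel to this plane
                      if (dist > 0.0f) return false;     // origin outside this half-space: no hit possible
                      continue;
                  }
                  float t = -dist / denom;
                  if (denom < 0.0f) tNear = std::max(tNear, t);  // entering the half-space
                  else              tFar  = std::min(tFar, t);   // leaving it
                  if (tNear > tFar) return false;
              }
              tHit = tNear;
              return true;
          }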

      Thus, given a pixel on a pyramid, we can now do quite efficient reflection calculations to find out how any initial ray bounces around between our pyramids before it escapes into the environment. The code was originally implemented in CUDA, and I was quite happy to finally have a reason to port it to DX10 and put it into a release.

      [Screenshot: higher-order reflections]

      There’s something to be said for the shortcomings of the reflection algorithm. Look for the sentence above where I wrote “essentially defined by a low number of triangular faces”. Of course, the pyramids that we actually rasterize are more refined than the representation by five planes would indicate. The mesh we use has nice round corners and edges, while the convex objects that we actually raytrace have hard, sharp edges and corners.

      It would be more appropriate to say that for the raytracing, producing the higher order reflections, we are using a rougher approximation of the pyramid mesh than we use for rasterization and first order reflections. This is quite noticeable when you are aware of it and start looking for it, and is a good reason why we decrease the intensity of reflected light in inverse proportion to the distance travelled by the ray while bouncing between the pyramids. Just take a look at the following screenshot for an example of the sharper corners in the secondary reflections:

      [Screenshot: note the sharper corners in the secondary reflections]

      At this point, I would like to make a comment related to this effect and to demoscene raytracing trends. Within the demoscene, lots of people have been concentrating on distance fields and produced some very interesting effects with them. However, I feel the focus has become too narrow. By this I mean that it somehow seems like people forget that distance fields are just another way of optimizing ray tracing.

      The important question is almost always: how do we, in the most efficient way, calculate the intersection between this ray and that object? It is not: how do I produce a distance field that encapsulates the geometric shape of that object? Sometimes, an answer to the latter question combined with the standard way of calculating the intersection between a ray and an object described by a distance field gives you the answer to the first question, but I think it would be healthy to keep the broader picture in mind.

      Physics (aargh!)

      Starting with the reference image, we always imagined having some physics code controlling the movements of our objects in the demo. For a long time, I had been doing GPU accelerated physics without really being able to produce something worth releasing. It was frustratingly difficult (for me at least) to create something that shows off the code and at the same time does not look like myFirstPhysicsSimulation.avi (just go to YouTube and search for “krakatoa” or “mograph” for plentiful examples). 

      Anyway, in the course of all this, I have been using Verlet integration for the simulation step. This is more or less by accident, having been seduced many years ago by the simplicity of rag doll simulations described by Jacobson. (I would like to take this opportunity to rant and complain about the incredible mess that is physics coding tutorials online. Even mathematicians get dizzy when seeing awful inertia tensor formulas, and at least one I know tends to run to his fridge and seek comfort in beer every time he tries to read through one of these tutorials. Rant over.)
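      For readers who haven’t met it before, a bare-bones Verlet step (illustration only, not my actual simulation code) looks like this — the position is advanced from the current and previous positions instead of an explicit velocity, which is exactly what makes the rag-doll-style constraint solving so simple:

          struct Vec3 { float x, y, z; };

          struct Particle {
              Vec3 pos;       // current position
              Vec3 prevPos;   // position at the previous timestep
          };

          void verletStep(Particle& p, const Vec3& acceleration, float dt)
          {
              // The difference between the current and previous position acts as velocity * dt.
              Vec3 next = { p.pos.x + (p.pos.x - p.prevPos.x) + acceleration.x * dt * dt,
                            p.pos.y + (p.pos.y - p.prevPos.y) + acceleration.y * dt * dt,
                            p.pos.z + (p.pos.z - p.prevPos.z) + acceleration.z * dt * dt };
              p.prevPos = p.pos;
              p.pos     = next;
              // Constraints (distance sticks, collision pushes) are then enforced by simply
              // moving positions around; the implied velocity follows automatically.
          }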

      Anyhow, for this demo we were not after world records in GPU accelerated physics, so I ripped out my code and made a nice CPU implementation of it. I must admit that it was done too hastily, and I am sure you can notice some physically questionable behaviour in the first scene of the demo :)

      The physics simulation in this demo runs in realtime. No big surprise, really. After all, we only simulate a handful of objects at any given time. However, I did have to admit defeat in one scene and bake the simulation. The reason for this was that my code suffered from the fact that it wasn’t deterministic. This was never a big issue when I was doing GPU accelerated physics, as I was focusing on one thing and one thing only: simulating the maximum number of bodies in realtime. (In fact, the C++ class for my GPU rigid bodies had a name reflecting this: FUCRRS — Fast UnaCcurate Realtime RigidS. Yes, really.)

      So why was determinism important now all of a sudden? Because of the following scene:

      [Screenshot: the scene with the colliding spheres]

      The way this part of the demo was supposed to be synchronized with the music was that every time one of these spheres collided, a corresponding sound would be played in the soundtrack. Of course, having a deterministic physics simulation is crucial if this is to work.

      The way indeterminism played a role here is a bit complicated. The whole story involves three software timers trying to stay in sync with each other using (finite precision) floating point numbers. Add to that the chaotic nature of rigid body simulations and imagine trying to fix this at 2 AM the night before our deadline. The choice fell quickly on precalculating the whole thing and being done with it. Of course, forgetting that I can’t code at all when sleep deprived, I messed up the precalculation and still managed to screw up one more time before finally fixing it the morning after. Unfortunately, the end result still didn’t quite work, but the reason for that includes FRAPS (which sucks) and multiple other factors. Needless to say, sync-nazi Bent wasn’t happy, and it will be fixed. :)

      In 2010, I promised myself to never again try to do rigid body physics simulations. I probably have to fix a few things and release a final of this demo, but after having done that I will make the same promise to myself once more.

      Sounds

      Bent back at the keys (see – I told you about the scrolltext analogy up there in the beginning!) to talk a bit about the sound design. I knew very early on that I didn’t want to have any melodies or detectable rhythm in the soundtrack (or soundscape, I guess). It was also important to maintain a feeling of a large, empty space, since the visuals would be in part very intimate, but also reflect something very empty.

      For reference material, I went to freesound.org and downloaded a whole bunch of drones, blips and recordings of traffic and wind. In the demo archive, you can find a list of all the IDs and filenames so you are free to listen to them to see how the sound was built up. In addition to lots of ambience-samples, I used my favourite VSTi – Gladiator 2 – to generate some of the lower-end of the soundscapes. If you pay close attention, you may notice that the amount of low-end in the various parts reflects the amount of visual “weight” in the same scenes. For example, in the last scene of the demo, there is quite a bit of bass and chaos, building up to the end.

      I also played with conventions in the part with the falling spheres. In this part, I didn’t want to have sounds that fit the visuals, and ended up using four different breathing samples (also from freesound.org) and mixing them together (including some pitching and time-stretching). The ping-pong samples at the end of the scene were meant to play on the fact that the “force-field” had been switched off, and that the gravity (= normality) would reintroduce the sounds that the viewer was expecting. The ping-pong sounds are still a bit “off”, though, seeing as the textures of the spheres are indicating something heavy and hard. Fun.

      In the part with the wooden pyramids hanging by a rope, the starting point was a long sample of a person pulling a big rope back and forth over a metal railing (at least, that was the sample description). This sound is used throughout the part, but is first introduced when the first extra part is added to the object. For the evolution of the scene I needed to find some samples that fit the object itself, and I was lucky to stumble upon a series of samples of drawers being opened and shut. The internet is a fantastic place. Various edits later, and the part worked well.

      Design

      The design (look, feel, motion, editing) was the easiest part of the whole demo. After having consumed more than my fair share of random motion graphics pieces from xplsv.tv (R.I.P) and Vimeo, quite a few design and editing conventions were quite clear, and most of the time actually went into deciding the order of the parts and tweaking the cuts (both with sound and timings).

      If you watch very closely, you’ll notice that the empty cuts between the parts are all of various lengths and there are various amounts of sound spillover between them (for example, the reverbs that tail off into blackness are sometimes very long, or “wet”, as we say, and at other times very short – almost instant, or “dry”, if you will). This is of course completely intentional. For example, in the opening shots where the scenes are empty, the cuts are shorter and the sound is dryer. This is because they are establishing shots, and the viewer does not need a lot of time to process the different parts. Later on, when there is something to focus on, the cuts are longer.


      With regards to the camera paths, I decided to stick with very simple moves. You can see the camera either dollying back, forth or to the sides. The only part with any sort of complex camera move is the last part (the raytraced pyramids), and even though I didn’t really want to do it in the beginning, it works well there because it’s the last part of the demo, and the viewer is guided towards realizing that the demo is about to end. The last part also went through a few versions before we settled on the one that’s in the demo now. The fact that nothing is happening (apart from the camera move) for 95% of the scene really makes it, even though it doesn’t show off the raytracing very well. Personally I think it’s quite confident to opt not to show off a complex raytracing scene, but then again that was always what this demo was all about – minimalism and mood over technology showoffs.

      As usual, my weapon of choice for syncing the demo was the very excellent GNU Rocket System by Kusma (of Excess fame) and Skrebbel. If you don’t use it to sync your demos — start now. It’s a life-saver, trust me. Luckily, Sverre had already implemented it in the demo engine, and I’m quite comfortable using it (see Sunshine in a box, Regus Ademordna or Scyphozoa for references).

      I also worked more than a fair bit with Sverre on the textures, because getting the “right look” isn’t easy. The wooden pyramids went through at least five revisions before we settled on the final look, and the various “rooms” also took a lot of tweaking to get right. Most of the textures aren’t remarkably high-res (1024×1024), but I paid attention to sharpening cleverly before the final export – a neat trick I’d like to see more of elsewhere and that I’ll most definitely repeat in later projects as well. One final word on the textures – the floor in the part with the hanging spheres is a direct reference to two things: the classic “checkerboard” of Amiga-demos of the 90s, and “American McGee’s: Alice“, one of the best games I have ever played. The whole tone of the demo is directly related to the opening video sequence of that game.

      Final words

      We hope this post has been interesting and not too snobby. Apologies if any of us went off on tangents or became too hipster-like in our descriptions. That’s sometimes what happens when someone is asked to go back and analyze old thoughts and ideas.

      Until next time, thanks for reading!


      The end of mixed feelings and a new demo

      25th of April, 2011

      [Screenshot from “Spheres on a plane”]

      Warning: this post begins with a bit of a long rant / history lesson, so if you just want to see the demo, skip down..

      Update (27.04.2011): I added a few paragraphs a bit further down to clarify some points.

      Last week was a pretty big demoscene week. I finished a prod for the competition at The Gathering 2011 and I also spent two days there with my wife (and one of them with my daughter as well, who charmed pretty much everyone in the hall with her antics).


      At this point I should mention, for those who might not know, that I used to be one of the main organizers of The Gathering (from 2000 until 2005, and I was a crew-member for many years before that). When I decided to stop doing that in 2005, it was not an easy decision – a big part of me wanted to carry on, but ultimately I decided against it, mostly for the following reasons:

      • The party no longer represented the people who were important to me – demosceners
      • A vast majority of the crew had little or no interest in the roots of the scene, returning to the party year after year just to meet friends and do nothing creative
      • I didn’t like who I became when I entered “full-on organizer-mode” – too much yelling and running around, too little creating – not fun, and not productive

      So I quit the TG organization and started focusing on two other areas instead: “my” (as in: I was one of the original founders and I’m currently the only remaining one) party – Solskogen, and Scene.org. I can safely say (and I’m sure others will agree) that this was the right choice indeed. Solskogen has since 2002 grown to become a must-attend event for almost the entire Norwegian (and scandinavian) demoscene, and all while not being a bitch to organize. The crew is fantastic, and our returning guests make the party what it is.

      My second demoscene “occupation” is with Scene.org – the largest demoscene archive in the world – where I involve myself with getting sponsors for the actual site (and its services) as well as doing outreach for the demoscene. The latter is a bit of a hot topic for some who believe that the scene should stay small and hidden. Naturally, I do not agree. :) This is why I travel to conferences pretty much all over the world (either on my own dime, or because I’ve been invited) to speak about the demoscene, real-time graphics and computer subculture.

      I dig doing both of those things, and it does leave me free to pursue other demoscene-related activities without completely burning through my spare time, but leaving The Gathering as an organizer wasn’t easy, and some small part of me always wished I hadn’t.

      Until last week. 

      The Gathering 2011

When I went there last Friday I became completely at peace with the decision I made six years ago, and here’s why:

      • I would only have held back the few people who longed to expand the creative areas of TG outside the conventions of the demoscene
      • Those people are today the most important people in the TG organization, because they have managed to do what I thought impossible: breathe new life into the creative areas of a huge, commercial “hybrid” computer event (I can’t call it a demo-party because it’s not, and I won’t call it a “LAN-party” because it’s so much more than that as well)
      • Had I stayed, I would only have burned myself out trying to accomplish something impossible, and I would have taken everyone else down with me

So you see, The Gathering – a party I had attended since 1993 (I was 15) – will always have a special place in my heart. The Gathering 1993 changed my life, and The Gathering 2011 changed it again – in a different way. I must give mad props to the people of the (semi-awkwardly named) “Creativia” crew: you guys (and girls) rock. I had a great time this year – not because I placed 4th in the competition (I’ll get to that in a bit :), in fact not because of that at all, but because the Creativia lounge, the professional stage shows, the competitions and the buzz at the event were “just right” this year. I dug it.

      Will I return to organize The Gathering? Nope, never, but it’s okay, because I don’t want to either. There are new kids in town, and they are doing a way better job of taking care of the creative heritage of The Gathering than I can. The good bet is to stick with them. Mixed feelings about TG? Not anymore.

Update: What people seem to be forgetting is that these things are cyclical – they come and go. Which is, as explained above, why I left TG as an organizer. At the time, it was interpreted by some as laziness or giving up, but the simple truth is that I couldn’t see how I could contribute any more, and the best thing was to move out of the way and let someone else take over. After that, the remaining team tried some different things, but they appeared to me (as an outsider) to be attempts at rehashing the past, and after a few years they tried something else, with a mix of new and old people. This last time, however, it worked. Big time.

At the head of every upswing there are eager and talented people pulling their weight to make it happen. In the past, that has included me, but now there are other people at the helm, and I couldn’t be happier. Again: congrats to the TG Creativia crew – a fantastic collection of people. I’m impressed and forever grateful, because a positive creative experience at TG – the biggest event of its kind in Norway, and very nearly the biggest in the world – has a spillover effect on everything around it, including Solskogen – my baby.


      Spheres on a plane

Oh, yes – the demo. About half a year ago I asked a friend of mine, whom I’ve done demos with in the past, if we should perhaps team up again for a new production – a big one. The kind of demo that wins parties with twice as many votes as the second place. He was up for it, and we started scheming. However, due to various things that happen when you’re an adult, it became apparent that this demo would not be doable within the timeframe we were looking at (and because we were lacking a good 3D artist – a must-have if you’re aiming for the kind of show we were trying to pull off).

      We therefore went back to the drawing-board and decided on making a very peculiar piece of art, inspired by various motion graphics pieces found on Vimeo (and other places). Something weird and highly conceptual. Something that threw all the conventional demoscene “guidelines” out the window.

      What we came up with was this:

      [vimeo http://www.vimeo.com/22878274 w=640&h=336]

Overall there is quite a bit of advanced code in this one, even though it might not look it. It is by no means a “throwaway demo” based on “leftover effects”, as at least one moronic person on the internet has described it. For example, the AO is particularly nice, and there is a fair amount of GI trickery going on as well.

There is also physics-based animation in almost every part of the demo, but it’s subtle, and not done in the traditional “THIS IS PHYSICS! LOOK AT IT!” way that demoscene productions tend to favour. I’m going to write a follow-up piece that goes a bit more in-depth on the various tricks that went into making the demo. Look for it within a few days.

In the meantime, you can download the demo and run it on your own computer (preferably one with a fast NVIDIA GPU and a fast CPU – it also needs either Windows Vista or 7 – sorry, no XP support).

      I am very happy with the way the demo turned out. I believe the best part of it was after the competition screening at the party, and someone (I can’t remember who, unfortunately) described it as “David Lynch-like” – whoever you were, thanks for that one.


      The NVIDIA tech demo I worked on

      1st of February, 2011
      [youtube http://www.youtube.com/watch?v=SbSo7onX9qI?rel=0&hd=1]

      Last year I was involved with a project to make a tech demo for NVIDIA. Yesterday a video capture of it was finally released to the public, enabling me to talk about it.

The demo was made to showcase the best of NVIDIA’s technologies and was targeted towards the then-new “Fermi” architecture (now known as the GeForce 400 series of graphics cards). The tech demo was developed by Virtex, a Norwegian company formed by friends of mine who are also well known in the demoscene for making demos of the very highest quality. The demo was first shown during the opening keynote at the 2010 GPU Technology Conference in San Jose, California.

My role in the project was first and foremost to make the music for it, a process that was very fluid and tightly tied to the status of the visuals (which changed over time). If I weren’t still bound by the NDA I would share a bit more about the process of making the demo, but you’re not really missing out – it was very demanding and made some of the smartest people I know go “Huh, how do we do THAT?” more than once, which was fun – watching very clever people be stumped is always a good thing, because then you know you’re really pushing it.

For the tech-heads: the demo was developed in C++ using DirectX 10 for rendering, and features physics simulations of rigid bodies and a 3D version of the famous Koch snowflake fractal. It uses CUDA and PhysX, and is 3D Vision enabled (you can use 3D glasses for a really immersive experience).
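For those wondering what a “3D version” of the Koch snowflake even means: the usual construction recursively splits each triangle of a mesh into four and erects a tetrahedral spike on the central quarter. The actual demo code is C++/CUDA and not mine to share, but here is a rough Python sketch of that subdivision idea – the spike height and the starting triangle are arbitrary choices for illustration, not taken from the demo:

import numpy as np

def subdivide(tri):
    # tri is a (3, 3) array: one vertex per row.
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2

    # Face normal and centroid of the central quarter-triangle.
    normal = np.cross(b - a, c - a)
    normal = normal / np.linalg.norm(normal)
    centroid = (ab + bc + ca) / 3

    # Apex of a regular tetrahedron erected on the central quarter
    # (height = edge * sqrt(2/3)); the height is a stylistic choice.
    edge = np.linalg.norm(ab - bc)
    apex = centroid + normal * edge * np.sqrt(2.0 / 3.0)

    return [
        np.array([a, ab, ca]),     # the three corner quarters
        np.array([ab, b, bc]),
        np.array([ca, bc, c]),
        np.array([ab, bc, apex]),  # the three faces of the new spike
        np.array([bc, ca, apex]),
        np.array([ca, ab, apex]),
    ]

def koch_surface(triangles, depth):
    # Apply the subdivision rule `depth` times to a list of triangles.
    for _ in range(depth):
        triangles = [t for tri in triangles for t in subdivide(tri)]
    return triangles

# One equilateral triangle in the XY plane, refined twice: 1 -> 6 -> 36 faces.
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.5, np.sqrt(3.0) / 2.0, 0.0]])
print(len(koch_surface([base], depth=2)), "triangles")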

A special nod goes to Einar Grønbekk (YouTube / Twitter) for being my session guitarist on this project. At some point I hope to be able to release the soundtrack on my SoundCloud page.
