Moving on

This blog was created to document my postgraduate research. It was an exciting couple of years, but off into the industry I go. If you delve into the pages of musings, ideas, inspirations and ramblings, you will find many questions and trains of thought perhaps left unanswered. I do not plan to update here again, but perhaps some answers can be found on my website, which will be kept up to date with my latest projects and adventures.

– Judit

A Piece of Paper

It feels like just yesterday I posted that I was ‘officially’ a masters student when I got my acceptance letter.

I’ve technically finished my masters. For everyone who expressed interest in reading it, click here! (Or ping me if you’re not able to access it through that URL).

It’s harder to pinpoint that moment where I can say I’ve officially ‘finished’ my masters.

Was it when I handed in the theory?


Or when I finished my practical exam?


Or when I got the results?


Or when I went to print the final version and had to choose what colour I had to bind it in?


Or was it when I received the final bound copy?


Or was it when I handed in my swipe card, forfeiting my access to the studio space that had been my home for 15 months?



Or will it be in two months’ time when I don my fancy gown and hat and walk across the stage to collect my expensive piece of paper?



Do you ever ‘finish’ being a student?


What are you doing next?

I am currently going through the soul-crushing process of looking for a job for next year: sending out lots of CVs/resumes and not getting any responses. I plan to join the brain drain and hightail it overseas. I am looking at technical and/or design-related jobs.

In the meantime, I am continuing to work at AUT University until the end of the year, picking up the odd bit of contract work, taking a holiday, going to conferences, presenting at conferences, and finishing my app.

What do you WANT to do next?

Live in the Northern Hemisphere, travel around the Northern Hemisphere, attend conferences, present at conferences, write code and push pixels as a way of turning ideas into realities and solving problems.

Are you going to do a PhD?

Not for at least another decade – one day I’d like to be Dr. Klein, maybe. I’ve been a student for 18.5 years now, and having been heavily involved with academic communities and the bureaucracy of a university, I am ready to move on and work in industry rather than do more research. The university has been good to me and given me some great experiences over the 5 years I’ve been employed here, but I feel ready for the next thing.

B+? Are you happy with that?

Not entirely. I know it doesn’t matter in the long run. I know I took some risks, and it’s still difficult placing the more creative disciplines in an academic space. My academic writing isn’t the strongest and my topic was still too broad. But I’m proud to have contributed to the world of knowledge and to be able to justify the things I say, because my research has proven them.

Was it worthwhile doing a masters degree / in hindsight, would you still do it?

My life over the past 2.5 years of postgraduate study has had some great experiences that I wouldn’t have been able to have, had I not been a student. I wouldn’t have been able to go to WWDC in 2012, 2013 and 2014. It’s highly likely I wouldn’t have met some of the people who have been very influential in my life. I wouldn’t have been a Prezi ambassador and I wouldn’t have gone to work at their HQ in Hungary.

I chose to do postgraduate study because when I finished my undergraduate, I didn’t feel like my work was ‘done.’ I didn’t feel confident with the skills I had to go and get a ‘real job’. I will never know what opportunities and experiences I might have ended up with had I gone straight into pursuing a job. So that’s a call I’ll never be able to make.

I’d always felt a bit like I’d drawn the short straw by being in the second intake of a new degree – to my knowledge, I will be the first person to graduate the masters program who has gone all the way through from bachelors. The people in first year now are getting a very different experience to the one I had in first year. It’s been great to see the degree evolve and to some extent, hopefully play a hand in shaping that.

Postgraduate is an interesting beast and also a very lonely experience – you spend a lot of time by yourself reading, writing, thinking. All your friends from undergraduate have gone. You’re working on a specialised topic so it becomes hard to find people to talk to about what you’re doing. Because I was blending programming with academia, neither academics nor programmers seemed to really understand what I was saying half the time.

It’s not for the faint-hearted, and at times it takes a lot of sheer will to sit down another day and just write. If nothing else, that piece of paper proves you can persevere and complete something that is incredibly difficult.

Weren’t you working on an app? When will it be on the App Store?

Late 2014. Excite!

When are you going to finish your website?

I’m working on it. Really soon. I mean it this time!

This is not a review of the Apple Announcements

It’s quite likely you’ve seen or heard something about the slew of new phones and devices announced recently. I was up at 4:30am Wednesday morning, ready and waiting for Apple’s announcement. However, as the title suggests, this post isn’t a review of those announcements, but rather my thoughts on what happened afterwards. As all the big players compete for not just our money but our allegiance to their brand, we take it upon ourselves to fight the battle. It seems you can no longer say anything, even positive, about one without being attacked by people with a different allegiance to yours. I wrote (but didn’t post) a blog post last week titled ‘Why can’t we all just get along?’ after I had a particularly resounding experience with this.

Nowadays, it’s more than buying a phone or a tablet: you’re buying into an ecosystem and all the other associated devices in that ecosystem. It’s saying that you have faith in the company to continue making products that you’ll want, especially in an age where obsolescence is a given. You’re committing to that platform and those apps you’ll be pouring money into.

So it’s almost no wonder we’re constantly trying to justify that the choice we’ve made is the ‘correct’ one: we increasingly identify strongly with the brands we purchase.

On the day of Apple’s announcements, I tried to avoid both social and mainstream media. I avoid reading the comments section on anything, and on the topic of technology and brand allegiance, Twitter and Facebook are the comments section of the world. The other main reason I avoid it is that, in the end, we all have the same point of reference: the information Apple made public in the keynote. Some journalists were there in person and got to try out the new toys, but even that would have been a very controlled experience from Apple. To know more than that, you’d pretty much have to work as an engineer for Apple.

Anything else on top of that is influenced by bias. Yes, many blogs and websites offer helpful summaries if you don’t want to sit through the 2-hour keynote, but as soon as that keynote kicks off, it’s a race to beat your competitors to being the first to report, to get the most hits. We know that people are easily influenced and often just want to be told what to think.

Anything else is witty commentary, parody videos, speculation and the inevitable Apple vs Android vs. etc. war.

I’m no journalist, but I got to experience that this year. Most of the keynote was a blur for me, as I was working on getting content directly related to the announcements out as quickly as possible.

There will soon be negative, scaremongering headlines along the lines of ‘is Apple Pay REALLY safe?’, and without even reading the article, people will make assumptions and form an image in their heads that Apple Pay is bad – when in reality, the article could be a positive one and the headline is just clickbait (after all, bad news sells papers / gets hits, right?).

So instead, you should make up your own mind about how you feel about the announcements. When people ask me what I think, I can only give a biased answer based on my previous experience with Apple products, because like you, I haven’t seen or tried out the new devices. The products sound great, is the short answer. I’ll inevitably one day upgrade my iPhone to another iPhone. I’ll probably get an Apple Watch. I’m excited about new APIs to play with and intend to tinker with the Apple Watch SDK when it becomes available too. I recently spent a couple of evenings tinkering with the Pebble SDK and am interested in the new kinds of interactions wearable interfaces open up.

The thing you need to remember is that any device is not just the hardware that you hold in your hand. It’s the combination of the hardware, the software and the overall design and interface. You need all of those together to make a successful product. It’s also that cliche line we hear a lot – it’s about how it makes you feel, and what got me was this line from the one article I did read about the Apple Watch:

“Or you could send a silent “I love you” tap to your spouse’s wrist when you’re thousands of miles apart.”

With friends and family and people very dear to me scattered across the world, this was an instant tug on my heartstrings. We live in a beautiful age where we can be more connected to people and content. For me that is what is important, to be able to engage with them in a meaningful way, and on a completely personal level, that kind of marketing works on me in a way I can’t quite explain.

There’s no doubt that the flurry of announcements over the last few weeks has kicked up the dust between all the different allegiances, so it seems an appropriate time to include the post I hesitated to put online last week. Click on the link below to view the full post if you think you can handle being open-minded.

Why can’t we all just get along?

Continue reading

I liked iBeacons before they were cool

iBeacon technology is a hot topic at the moment. That’s probably why you clicked on the link that brought you here right?

Since it was announced at Apple’s Worldwide Developers Conference in 2013, blog posts about iBeacons have become a dime a dozen. I was in the What’s New in Core Location session, which was the first session to mention it. I went along because location awareness was one of my first interests when I began developing for iOS back in 2010.

I actually came across this video where I was presenting on behalf of my group at the end of my second year of university. It was a project we worked on with the Auckland City Art Gallery to reimagine the traditional audio tour guide and deliver engaging and interactive content on a mobile platform. In a perfect world scenario, we envisaged that you could go stand in front of any art work and the app would know which one you’re looking at and present relevant information to you about that art work.

I’m going to skip the iBeacon 101 and assume you know about them already if you’re reading this.

I was excited about iBeacons. Like, REALLY excited. It gave me a way to build the app I’d had in my mind for over a year at that point – one that wouldn’t otherwise have been feasible. It’s the app I’ve been working on as part of my masters, which I will be releasing later this year, so excuse the vague details in the rest of this post.

In July 2013, I gave a presentation about iBeacons at the Auckland iOS Meetup while it was still under Apple’s Non-Disclosure Agreement. A year later, at the same meetup, we’re seeing a live demo of an app built over a lunchtime that involves taping a beacon to a dog’s collar.

As expected, retail has jumped onto the beacon bandwagon fast: many retail stores have already installed beacons with companion apps that offer a more ‘personalised’ shopping experience (or just special offers and deals as soon as you walk in the door). Beacons have the potential to go far beyond galleries and retail (health, for example), but that’s not what I wanted to write about.

Instead, here are some things I’ve come across while using them.

Accuracy issues are not new or unique to beacons: accuracy has always been a problem with location awareness. There are many things that can interfere and give you an inaccurate reading. With beacons, you’re likely to have multiple beacons in a space, combined with people, walls and other obstacles in the same space.

In the app I’m building, I scan for beacons and get back an array of beacons – didRangeBeacons:(NSArray *)beacons. I was taking the first item in the array because it was considered to be the closest. I didn’t care how far away it was; I just wanted the closest one. However, I found this was only accurate maybe 50% of the time.
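As a point of reference, here’s a minimal sketch of that ranging setup in (modern) Swift syntax – the class name, UUID and region identifier below are placeholders of my own, not from the actual app:

```swift
import CoreLocation

// Minimal ranging sketch – names and UUID are placeholders.
class BeaconScanner: NSObject, CLLocationManagerDelegate {
    let locationManager = CLLocationManager()
    // You must know the UUID of the beacons you're interested in.
    let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "A5D52F00-0000-0000-0000-000000000000")!,
        identifier: "example-beacons")

    func startRanging() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // The naive approach: assume the first element is the closest.
        guard let first = beacons.first else { return }
        print("Closest (supposedly): major \(first.major)")
    }
}
```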

I tried taking a sample of 10 values and then taking the most frequently occurring. This made things a bit better, maybe 75% of the time it was correct.

I took a step back: is the first beacon in that array really the closest? I had a look in the documentation which says: “An array of CLBeacon objects representing the beacons currently in range. You can use the information in these objects to determine the range of each beacon and its identifying information.” I went back and checked the very first WWDC session on iBeacons and sure enough: “And as long as that beacon count is positive I can take the first object in that array, which is roughly equivalent to the closest beacon.”

Aha. It didn’t take long to write a simple beacon app: a table view that showed all the beacons in the order that they are returned in the array, identified by their major value. I also included all the information I had about it: the proximity value, the RSSI, the accuracy.



I immediately saw what the problem was. The proximity value corresponds to several enum values:

0: CLProximityUnknown
1: CLProximityImmediate
2: CLProximityNear
3: CLProximityFar

As the image above shows, the array is sorted in terms of proximity and if the proximity of a beacon is ‘unknown’, or, 0, it will appear first in the array and be considered the closest. As I experimented moving my beacons around, I found that if a beacon was shown to be ‘unknown’ it was never the closest. Often it was the furthest away.

One suggested solution was to use the RSSI and accuracy values and smooth them out, but this too is useless if you’re getting unknown values (RSSI returns 0 and accuracy returns -1).

So, back to my code: I started ignoring any beacons with unknown proximity values, checking the next beacon in the array until I found one that wasn’t zero. As before, I did this ten times, took the value that occurred most frequently and treated that as the truth. Finally, this was getting me the correct result pretty much every single time. I was happy.
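Sketched out in Swift, that filtering-and-voting logic looks something like this (the type and method names here are my own, not the original code):

```swift
import CoreLocation

// Skip 'unknown' beacons, collect 10 samples, take the most frequent
// major value as the nearest beacon. Names are placeholders.
class NearestBeaconSampler {
    private var votes: [NSNumber] = []
    private let sampleSize = 10

    // Feed each didRangeBeacons array in; returns the winning major
    // value once 10 usable samples have been collected, else nil.
    func addSample(_ beacons: [CLBeacon]) -> NSNumber? {
        // Ignore beacons with unknown proximity – in my testing they
        // were often the furthest away, not the closest.
        guard let candidate = beacons.first(where: { $0.proximity != .unknown }) else {
            return nil
        }
        votes.append(candidate.major)
        guard votes.count >= sampleSize else { return nil }

        // The most frequently occurring major value wins.
        let counts = Dictionary(grouping: votes, by: { $0 }).mapValues { $0.count }
        let winner = counts.max { $0.value < $1.value }?.key
        votes.removeAll()
        return winner
    }
}
```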

Then I tried it out in a ‘real world’ type setting with actual people who aren’t me using it and the results weren’t the best. Once again, the users were only picking up the correct beacon 75% of the time.

Using Motion

Another trick I incorporated was to combine motion detection as a trigger to start scanning for beacons. In the context I was designing for, it was assumed that while the device was stationary, the user was still at the same beacon. Generally a pretty fair assumption. Therefore, it was programmed to only scan for beacons if motion was detected.

I started by using Core Motion and the M7 chip, which is available in newer devices (iPad Air, iPad mini with Retina display, iPhone 5s and anything that may emerge after this post is published). The CMMotionActivityManager has several activity types, exposed as booleans, so you can simply ask it whether the user is stationary. I found this was far too sensitive for my needs: the slightest motion would set it off. So I tried the walking activity type, and this was not sensitive enough, because the latency on it was about 7-10 seconds.

In the end, I went back to the trusty accelerometer, which gave me much more fine-tuned control over setting a movement threshold and is available across many more devices. I settled on a sweet spot where it didn’t trigger just from holding and moving the iPad, but did trigger when I stood up and walked a few paces (or gave it a deliberate shake).
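A rough sketch of that accelerometer trigger (the threshold and update interval below are placeholder values; the real sweet spot has to be found experimentally):

```swift
import Foundation
import CoreMotion

// Fire a callback when total acceleration deviates enough from gravity.
class MovementTrigger {
    private let motionManager = CMMotionManager()
    private let threshold = 0.3  // placeholder – tune per device and context

    func startMonitoring(onMovement: @escaping () -> Void) {
        motionManager.accelerometerUpdateInterval = 0.1
        motionManager.startAccelerometerUpdates(to: .main) { data, _ in
            guard let a = data?.acceleration else { return }
            // Magnitude is ~1.0 g when the device is at rest,
            // so measure the deviation from that.
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            if abs(magnitude - 1.0) > self.threshold {
                onMovement()
            }
        }
    }

    func stopMonitoring() {
        motionManager.stopAccelerometerUpdates()
    }
}
```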


So combine all that:

– Launch app.

– App scans for beacons.

– If more than one beacon is found, cycle through the array to find the first one in the array where the proximity value is not ‘unknown’.

– Repeat 10 times.

– Take the most frequently occurring beacon from the 10 values. This is considered to be the closest.

– Stop scanning for beacons.

– Start detecting movement from accelerometer.

– If movement is detected, scan for beacons.

– Stop monitoring for movement while scanning is taking place.

Simple right?

The problems with peripherals

I was using Estimote beacons. Estimote has quickly become one of the best-known makers of beacons, with an associated API and a companion app that I’ve found handy for the dozens of times I’ve explained to people what those colourful things sitting on my desk are. I like them well enough (despite having to attack them with a craft knife to change the battery), but I haven’t been using their API, as I didn’t want to tie my app down to one specific type of beacon.

The inherent challenge with any beacon app is that it is difficult to ship an app that relies on the presence of beacons, which is why most of the current use cases are situations where the app is tied to a specific location under the control of the app’s provider – see: retail. You need control over the physical environment the app is intended to be used in. This means beacon apps are not yet suitable for most general, off-the-shelf App Store apps.

For example, it is a challenge I am facing, designing for use in a classroom or lecture hall environment. I have a specific UUID hard-coded in that the app searches for, but I have no way of ensuring that anyone anywhere who downloads the app will have access to beacons and be able to configure them appropriately. So the app has to be able to function without them.

(As a bit of Beacon 101 if you aren’t familiar – you can’t arbitrarily scan for any beacons: you have to know the unique identifier of the one you’re interested in.)

It is problematic relying on any type of peripheral device. It is extra overhead, extra hassle, another something that has to be configured and managed – even something as simple as making sure the batteries are replaced. In the education arena, a lot of the people you talk to will have some tale to tell about grappling with their institution for the support and resources to manage all the new toys and commercial products that find their way into education, and about who is responsible for them. Gone are the days when you used what IT provided to you, and any utterance of BYOD makes a sysadmin grimace.

It is worth pointing out that iOS devices can act as beacons, which again is more useful for retail outfits which use an iPad for their POS.
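For the curious, advertising as a beacon from an iOS device takes only a few lines. This is a sketch only – the UUID, major/minor values and identifier are placeholders:

```swift
import CoreBluetooth
import CoreLocation

// Turn an iOS device (e.g. a POS iPad) into an iBeacon transmitter.
class BeaconAdvertiser: NSObject, CBPeripheralManagerDelegate {
    private var peripheralManager: CBPeripheralManager?
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "A5D52F00-0000-0000-0000-000000000000")!,
        major: 1, minor: 1, identifier: "pos-ipad")

    func start() {
        // Delegate callback fires once Bluetooth state is known.
        peripheralManager = CBPeripheralManager(delegate: self, queue: nil)
    }

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        guard peripheral.state == .poweredOn else { return }
        // nil measured power falls back to the device default.
        let data = region.peripheralData(withMeasuredPower: nil) as NSDictionary as? [String: Any]
        peripheral.startAdvertising(data)
    }
}
```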

So what?

It’s an exciting time we live in: beacons weren’t public knowledge when I started my masters, over 15 months ago now. I’ve been playing with beacons for the better part of 10 months. It amuses me how many blog articles and other pieces on beacons state that Apple ‘quietly’ announced iBeacon in June 2013, implying it was something they wanted to keep quiet. As with everything announced at WWDC 2013 and prior, it was under NDA until about September.

Ultimately, beacon technology is still new. Many of the beacons out there are still beta or developer preview. I used them in the app I’ve been working on for the last 10 months which I demonstrated two weeks ago as part of the examination for my masters degree. It was without a doubt the most stressful part of the demo. The day before, I was still running around the office trying to make sure the accuracy was the best I could get it with all the methods I outlined above.

In the end, it all worked perfectly. But I’m not ready to count on it to work perfectly out in the real world when I’m not there to babysit and choreograph my users.

Beacons open up a lot of possibilities and if nothing else, retail will serve as a guinea pig while we iron out the bugs for other applications for it.

I made a thing…and it is AWESOME

When I started my masters degree, one of my goals was to have something at the end of it that I could hand to someone and say ‘here’s something I made…and it is AWESOME.’

Today is that day. The last leg of my masters degree was a practical examination / demo of the app that I made.

Syncrasy is the resulting practical component of a masters degree in Creative Technologies. It was designed in light of educational technology research combined with software development. The research aimed to bridge the disconnect between the two perspectives by looking at the software development process as an influential factor on the use of mobile devices in tertiary education.

The app itself enables real time collaboration between users based on their location and proximity. The dynamic nature of the interaction means that the shared canvas, built up from images and text contributed by multiple users, exists only while the participants are in close proximity. It adds in the content of new participants who join the space, and removes the content of those who leave.

It will be a couple of months before I officially release it on the app store but if you’re in the Auckland area and are interested in seeing it, I’m doing a public demo tonight – Wednesday 30th July, come along any time between 4-6pm at AUT University in WG1002. For those unfamiliar with AUT, that’s level 10 of the Sir Paul Reeves building on Governor Fitzroy Pl. in the city.

Click here for map

All welcome.

What now?

I was asked the other week to write a short bio for an upcoming conference that I will be speaking at.

This caused a minor crisis as I realised that by the time that conference rolls around, I will no longer be a student. My definition of who I am and what I do will no longer be associated with university or formal education. After 18.5 years as a student, I was at a loss for even how to begin to define what I do beyond ‘Judit is a creative technologist’ (which in itself doesn’t explain much).

I wrote on a similar crisis as I started my masters degree and now as I am so close to the finish line, I am still wondering the same thing.

I like conferences. I often write about how much I like conferences and the different ones I attend. One of the many reasons is that they are a great way to try out who you want to be. I don’t mean that you should lie about who you are or what you do, but I’ve always found conferences a great way to perfect your ‘story’. You’ll meet so many new people, and you rarely have time for deep and insightful heart-to-hearts (with the possible exception of the 6-hour queue for a keynote). It’s a little longer than an elevator pitch, but not by much. You meet someone new and you are asked some variation of “what do you do?”

Because conferences tend to bring together likeminded people around a topic or field – however broad or narrow – to be immersed in that mutual interest for a given period of time, when you introduce yourself, it’s also a way of saying ‘here’s where I sit in relation to this topic’, be it programming, design, gaming, whatever.

We tailor our stories based on who we’re telling them to, so at WWDC I tried on a few different versions of what I’m going to ‘do’ next, seeing what felt right and what the responses were. I’ve faced a similar problem before, when I went to a conference just after I finished my bachelors degree. It was a conference with lots of academics, so I started it with ‘I’m thinking of doing postgrad’ and finished it with ‘I’m going to do postgrad in this research area…’

It’s like using a coin toss to make a decision: as soon as that coin is in the air, you know what you really want the outcome to be.

I found this problematic at WWDC this year because I’m close to the end of what I ‘do’ and I don’t know what the next thing is. What I ‘do’ also happened to be the reason I could be there at all, so I would begin with “I am a student (here on a student scholarship), one month out from finishing my masters degree.”

More often than not, this would provoke the much dreaded question, which was some variation of: ‘what are you going to do next?’

I came across a comic this week, depicting in a humorous yet strangely accurate way the truth about college/university.

Even before finishing, I am already feeling like this:


No matter how many times I’ve been asked, the answer isn’t clear yet. Don’t get me wrong – I have options for what to do next, but it’s more that long-term question of what I want to be.

I was in Sydney for a conference last week (what else?) and noticed something interesting. It was a conference outside of my direct expertise: targeted at sys admins and the content was about deploying and managing devices. A lot of the content went over my head, but in a good way where you actually feel like you’re learning and branching out. In that context, I wasn’t afraid to admit that I didn’t belong, that I had very little existing knowledge of what was being talked about.

Funnily enough, I was giving a presentation. For a while I was terrified that this audience wasn’t going to get anything out of what I had to say, but the feedback was fantastic and I was blown away by how many people came up to talk to me afterwards.

The interesting thing was, when I go to programming conferences, for the longest time I have been afraid of being figured out as someone who is just pretending to belong there. However, at a conference that genuinely was outside my area of knowledge, I wasn’t afraid to admit it, even during my presentation. It was a great learning opportunity and, as always, I feel privileged to have had the chance to be there.

To return to my starting point: I managed to cobble together a bio (or asked someone else to write it for me). I am speaking at a fantastic iOS and OS X developer conference in Melbourne in September. You should too.

So at least that is something to keep me occupied as I figure out what to do post-postgraduate.

For now, back to battling the thesis.


Do you even Swift, bro?

It costs $1600 USD for a WWDC ticket – if you can even win the lottery for the chance to buy one. From New Zealand, it’s about $2000+ NZD (~$1700 USD) for return flights to San Francisco. Add on the over-inflated hotel room charge for the week, which even for the minimum of five days will set you back at the very least another $1500 USD (unless you stay in the dodgier parts of San Francisco).

So, what’s the point? With all the sessions online same day, the sessions themselves are no longer reason enough. So the standard response tends to be the networking and the parties, which you can mostly experience just by being in San Francisco the week of WWDC anyway.

There’s the experience, and I tend to be a big advocate for the whole ‘experience’, immersing yourself for a week in code, the thing you’re passionate about with 5999 other equally passionate and likeminded people. That passion that makes people queue up at the crack of dawn for the keynote. But that’s a blog post I’ve written already, almost exactly a year ago.

After last year, I’d convinced myself that there was no point for me to go again, but sure enough, as soon as the announcements were made, it flipped a switch in me and I was trying really hard to convince myself that I did not ‘need’ to go.

So what caused that? As best as I could understand my own insanity, it was about what WWDC has come to represent in my life over the past three years: my journey through programming. And as this was my last year as a student, why not apply one more time for a scholarship?

I’ve had a lot of discussions, both at WWDC and since I’ve gotten back, with developers and non-developers alike, and this post summarises my thoughts on WWDC 2014. My thoughts leading up to it were in the NZ Herald a few days before I left for the US. I knew there would be awesome things in the pipeline; I was pretty sure we’d be getting a new version of iOS and OS X, and pretty sure there wouldn’t be any hardware released – no new iDevices anyway. All the announcements are online so I won’t go over those, but these are my own reflections on what the announcements meant in the bigger scheme of things and for my own work.


“I think a developer would look like a geek.”

The opening words of the WWDC 2014 keynote presentation. The first few words were drowned out by the audience’s cheering, set off by the lights going down; when we realised that no one was coming on stage yet, the cheering came to a halt.

The first word I heard after the cheering stopped was ‘geek’. It seemed an interesting note to start the keynote on but the video set the scene for the conference which was to be the idea of making programming more accessible, starting by demystifying who a programmer is. Of course there’s the stereotype of the geek, but the reality is much broader than that now.

The announcements in the keynote were completely software-based: no new hardware, no new toys. It made me sad thinking about non-developers watching, who would be thinking ‘wow, what a terrible announcement, there was no new hardware.’ But developers rejoiced: so many new frameworks and features to play with. Usually there wouldn’t be any mention of APIs until the following ‘Platforms State of the Union’ session, but there was Tim Cook on stage explaining what an ‘SDK’ is.

“…and for those of you that are not developers, the SDK is a Software Development Kit that enables developers to make all of the amazing apps.”

Then came the completely unexpected blow at the end: Swift. For that whole week, the ice-breaker for any conversation was to be:

So, what do you think of Swift?

The initial knee jerk reaction to the announcement was ‘oh no, change, we have to learn something new!’ That was my first reaction anyway.

Going to the sessions on it and getting more familiar with it, Swift definitely seems well designed, easy to use and just as powerful as Objective-C. The documentation and resources are excellent, and the enthusiasm for it grew over the week.

Though Apple had a few goals with the new language, one of the things they’ve done is lower the barrier of entry to programming. “Swift is friendly to new programmers,” states their iBook, The Swift Programming Language. When I was starting out learning programming, I found that every book I picked up began with some variant of “This book assumes you already have an understanding of [other language].” (I ignored these and continued undeterred.)

The rampaging growth of mobile has brought increasing numbers of people from non-programming backgrounds who are interested in learning how to make apps (such as myself, four years ago), and this language makes it easier. New people starting out will learn Swift, not Objective-C, while existing developers can choose to learn Swift or continue with Objective-C – for now, anyway. It is expected that in the not-so-distant future, Objective-C will be deprecated.

The demo code in many sessions still used Objective-C: sessions about existing frameworks used Objective-C, while sessions about the new iOS 8 frameworks used Swift.

I’m not saying it is a good or a bad thing that they’ve made an ‘easier’ language.

However, in my personal opinion: by abstracting away the complexity of programming, you start to lose an understanding of fundamental programming concepts. I speak from my own experience starting iOS development from a zero programming background (and disregarding the disclaimers in the books). I learned a lot from demo code, tutorials, YouTube videos narrated by people who sounded like they were 12 years old, hacking things together, Stack Overflow, etc. I could make apps and they worked, but I lacked a lot of the underlying computer science fundamentals, many of which I’ve only started learning this year. The change was incredible: I am writing better quality, more efficient code faster, and it is easier to troubleshoot when things go wrong. This was my fourth time at WWDC and I understood so much more this year.

Swift abstracts away the need to know about primitive data types (‘what is an int/float/long/string?’) by automatically inferring the type when you declare a variable. You could argue this isn’t that important, but when Apple brought out the 64-bit chip, apps started appearing where certain values were off because developers didn’t understand how the change could affect them.
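A small sketch of what that inference looks like in practice (the sizes noted are what Swift uses on 64-bit platforms; the constant names are just for illustration):

```swift
// Swift infers each constant's type from its initial value.
let city = "San Francisco"   // inferred as String
let attendees = 6000         // inferred as Int (word-sized: 64-bit on an arm64 device)
let version = 1.0            // inferred as Double, not Float

// An explicit annotation is still available when the default isn't what you want:
let smallValue: Int32 = 42   // a fixed 32-bit integer regardless of platform
```

Because `Int` follows the platform word size, code that silently assumed 32-bit behaviour is exactly where those off-by-a-lot values crept in on the 64-bit chip.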

In my degree, we get a lot of artists who use languages and platforms with a low barrier to entry, such as Processing, to make interactive artworks and installations, and I can see Swift, with its Playgrounds environment, becoming another one of these tools. Are these people ‘developers’? I would argue that being able to write code does not, on its own, make someone a developer.

My personal experience this year

I am an introvert (not to be confused with anti-social): after spending the day with 6,000 people, I struggle in the evenings to make it out to the parties, so I focus my energy on the experience during the day, and this year that meant going to the labs. We all live in fear of asking the ‘stupid question’, especially when you’re asking programming questions of the engineers who live and breathe (and write) the frameworks and APIs every day.

This WWDC was the first time I had an app nearing completion, along with questions and bones to pick with the engineers. I spent time in the labs for Multipeer Connectivity, AirPlay, Core Location, and Prototyping, and even queued for an hour to get an appointment at the UI Design lab (a 30-minute one-on-one with a designer).

One thing I’m increasingly finding with programming is that just because you can solve a problem at a high level doesn’t mean it’s as easy to solve at the code level. You can take a step back and say, “okay, this is logically what I would need to do to achieve the desired outcome or functionality”, but the APIs or frameworks might not support it. Those are a lot of the problems I’ve faced with my app, and the feedback I got in some of the labs reflected that.

Unfortunately, sometimes there are bugs in the frameworks themselves, and the only feedback they can give you is ‘yeah, we know about those, we’re working on it. There are some fixes in iOS 8. But there are also some regressions. File bugs.’

It was a mixed bag of results at the labs, but in a way, even when they couldn’t help me with solving my problems, it felt like I was on the right track to creating something new and unique.

Over on my personal blog, ‘get an app in the store’ has been on my perpetual ‘To Do’ list for a few years now, and I haven’t achieved it yet. But you know what? ‘Get driver’s licence’ was on there for quite a few years too, and I got there.

When I started programming, I could make an app that ‘mostly’ worked and did what I wanted, but it wasn’t pretty. I lacked the knowledge to really polish and finish it, to hunt down problems and bugs and fix them. I knew how to solve problems from a high level but I couldn’t implement the solutions in code (at that point more because I lacked the skills than because the tools were lacking). I didn’t release anything because I didn’t have anything that was my baby, nothing that was ‘perfect’, nothing that I was proud to stamp my name on and say ‘I’ve made this, and it is AWESOME’, nothing that I was prepared to support for years to come.

And that’s okay. I know nothing is ever going to be ‘perfect’ and you’d kill yourself trying to reach it. There is the mentality of ‘F*** it, ship it’, the idea that you can fix things in version 1.1 or 2.0 because it’s so easy to ship updates, but I never got close to even a 1.0. However, the store is full of version 1.0 apps that never made it any further. Many of those didn’t nail it on version 1.0, and I suspect it wasn’t something the developer was really passionate about; it was just a //ToDo they wanted to tick off and say was done.

When I ship my first app, I know it won’t be a million dollar success overnight (or even at all). I know I won’t have Facebook knocking on my door offering me $3b. Those stories are the exception rather than the rule, and I fear that’s what a lot of people are getting sucked into. With Swift, it is a lot easier for anyone to get an app on the store. But is that a good thing?

I just know that when my app goes on the store, it will be something I’m proud of and will want to show off. It terrifies me thinking ‘what if no one wants to use it, what if I’ve put all this effort in for nothing?’ Worst of all, I have a demo for my masters examiners in a month’s time, and above all, I’m terrified that they won’t get it.

One significant difference this year: after the month of solid coding I put in earlier this year, I was able to get a lot more out of the sessions. At my first WWDC, in 2011, I’d been coding for less than a year and almost all the sessions went over my head.

It’s hard to say exactly what was different but in the demo code and the presentation slides, I just understood a lot more. The code made sense. I could see how I’d implement it.

Overall, it was a fantastic WWDC, the best one I’ve been to so far. I got a lot out of it, both technically and socially. I am hugely grateful to all the engineers who take the time during the week to be in the labs, be hassled by developers, and answer questions.

Slowly, I’m starting to feel like I belong here (as I’ve said before: any hesitation on that has nothing to do with my gender). I wouldn’t call myself a developer. I can write code, I can solve problems from a high level and learn what I need to know to implement it in code.

So who is a developer? Yes, many developers have interests or hobbies that fall under the geek umbrella. I’ve embraced the label since I was 15 (despite protests of ‘you’re not a geek – you’re a girl!’). The conference opening addressed this stereotype but then made the point that you don’t have to be a geek to be a developer (and you don’t have to be a developer to be a geek!). The video also explained that developers are pretty much wizards…and you can be too.