Starting a New Project

I’m moving on from QR Codes. They were fun to play with, and I also got to see some of the Core Graphics and Core Image filter APIs. I’ve also decided that the CoreX stuff will need a thin abstraction layer for proper usage in Swift. For example, instead of using a CFDictionary, I want to use a native Swift dictionary.
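To sketch what that abstraction might look like (the wrapper function here is my own invention, not part of any framework), the idea is to accept a native Swift dictionary and do the CFDictionary bridging in exactly one place:

```swift
import Foundation
import ImageIO

// Hypothetical thin wrapper: callers pass a plain Swift dictionary,
// and the bridge to CFDictionary happens here and nowhere else.
// Swift dictionaries toll-free bridge to CFDictionary with a cast.
func imageSource(at url: URL, options: [String: Any] = [:]) -> CGImageSource? {
    return CGImageSourceCreateWithURL(url as CFURL, options as CFDictionary)
}
```

The rest of the code base then never mentions Core Foundation types at all.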

My real worry with starting a new project is failure to complete. I have a real desire to get this one onto the App Store. If history is anything to go by, the odds of that are grim. Even so, I thought I would go ahead and blog about the process I am going through. Maybe someone will read it and give me ideas.

The new project I have in mind has been given the code name Leonardo. I actually hope to make that my product name in the App Store. It’s an OS X project that will be implemented in Swift. Obviously I will have to use AppKit and some of the Core Foundation and other frameworks. Hopefully I can structure the code so that the majority of it is straight up pure Swift. The reason to do this is I like Swift a heck of a lot more than I like Objective-C. I may even like it more than C.

So anyway. The first step in creating a new project is to have a mission statement, so to speak. It’s the goal of the project. It’s the thing that should not be lost sight of. I’m using TextEdit to write down my notes on the project. Here is the statement:

Mission Statement


My notes include more than the mission statement of course. I have a strategy for achieving the goal, if I can indeed achieve it. A fundamental part of the strategy is the file format itself. It needs to contain the information acquired from the user via the UI. I’ve decided to use a bundle of JSON files. There will also be bitmaps stored in the bundle if they are imported into a project.
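Nothing about the format is settled yet, but one JSON file in the bundle might look something like this sketch (every key name here is a guess at this point):

```json
{
  "version": 1,
  "layers": [
    {
      "name": "background",
      "shapes": [
        { "type": "rect", "x": 0, "y": 0, "width": 1024, "height": 768,
          "fill": "#ffffff" }
      ]
    },
    {
      "name": "sketch",
      "shapes": [
        { "type": "path", "stroke": "#000000", "strokeWidth": 2.0,
          "points": [[10.5, 20.0], [11.0, 21.5], [12.25, 23.0]] }
      ]
    }
  ],
  "images": ["imported/photo-1.png"]
}
```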

The first step in my project will be rather peripheral. I will be working on importing an SVG file, which is XML, and converting it to JSON. I will also go the other way. On top of that, I will render the SVG file to PNG, JPEG, and PDF. I will also support SVG output. I’m not sure that SVG supports all the features I have in mind though. So SVG output may lose information, or the data for rasterization may get really complex to create.

XML is actually kind of complex to parse, unlike JSON. I will almost certainly use the NSXMLParser class to do the work for me. While I would love to do the job in pure Swift, it’s work that is not core to the project.
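A minimal sketch of that delegate-based parsing, in current Swift spelling (NSXMLParser is now just XMLParser), might look like this. A real SVG importer would build a tree of shapes here rather than print:

```swift
import Foundation

// Records SVG element names and attributes as the parser sees them.
class SVGScanner: NSObject, XMLParserDelegate {
    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String] = [:]) {
        print("element: \(elementName) attributes: \(attributeDict)")
    }
}

let svg = "<svg><rect x=\"0\" y=\"0\" width=\"10\" height=\"10\"/></svg>"
let parser = XMLParser(data: Data(svg.utf8))
let scanner = SVGScanner()
parser.delegate = scanner
parser.parse()
```

The parser drives the delegate, so the import code is mostly a matter of reacting to each element as it arrives.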

JSON may not be compact like a binary format, but the human readability is a great benefit. There are other wins in using JSON as well. To give a hint at what I mean here: Lisp code is written as a Lisp data structure. Code is data. JSON is also well defined and has all the necessary elements in it to describe an image.

I do worry about recording all the user input taking up too much memory and creating very large files. I’ll have to do empirical testing to see if that will be the case. The part that will take up the most space will be the free hand drawing. Basic shapes that are part of SVG and Quartz Core won’t present such problems.

Another concern is processing the data fast enough to give the artist the impression that he is manipulating pixels rather than vector data. There is more work here than simply pushing pixels around. Again, I do not know if this will be a problem or not. Humans think in millisecond time scales at best, while the CPU clock is ticking away over a billion times per second. That’s about a million times faster. Also, the typical refresh rate of an LCD display is only 60 Hz, which leaves roughly 16.7 ms per frame.

I should be able to beat that window.

This is actually an ambitious project I am taking on. There are already programs out there that allow users to do all sorts of fantastic drawings. I’m looking for a different approach that I hope will prove to be advantageous over traditional bitmap drawing programs.

Placing a logo image inside a QR Code

It is perhaps not all that well known that QR Codes were designed to be readable even when damaged. Error correction is built into the codes so that smudges, wrinkles, and other damage do not prevent a successful scan. This feature has the side effect of allowing an image to be embedded inside the QR Code, damaging it in the process, without seriously affecting the ability of the code to be scanned.

Some people have already taken advantage of this feature. You may have seen it before. I present to you a simple bit of Swift code that I wrote and ran from inside Xcode to create such a QR Code for a friend of mine to advertise his YouTube site. This code can be freely modified as required for your own use. I don’t see a need to copyright something so trivial.

[gist user="DavidSteuber" id="a6bf3df8c290ffc3839a"]

QR Code with logo


The reason for creating such a large code in this case is so that it can be printed onto a business card, sticker, or what have you without needing to scale up. Certainly there are other ways to do the job. In this case, I’m pretty much treating Swift as a scripting language for a one-off job.

The advantage of this approach is that it makes the method for creating such a QR Code with Quartz very obvious, without a bunch of boilerplate code for a proper app. The one downside is that it doesn’t show how you might do the same job in iOS and save the output to your photos. This script runs on OS X.
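For anyone who cannot see the gist, here is a sketch of the general technique (not the gist itself): generate the code with Core Image at the highest error-correction level, then draw the logo over the center. The function names are mine, and the 20% logo size is just a conservative guess:

```swift
import Cocoa
import CoreImage

// Level "H" error correction can recover ~30% of the symbol, which is
// the headroom that makes the logo "damage" survivable.
func makeQRCode(from text: String) -> CIImage? {
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(Data(text.utf8), forKey: "inputMessage")
    filter.setValue("H", forKey: "inputCorrectionLevel")
    return filter.outputImage
}

// Draw the logo over the center of an already-scaled-up QR Code image.
func composite(logo: NSImage, ontoQRCode qr: NSImage) -> NSImage {
    let result = NSImage(size: qr.size)
    result.lockFocus()
    qr.draw(in: NSRect(origin: .zero, size: qr.size))
    // Keep the logo small relative to the code so it still scans.
    let side = qr.size.width * 0.2
    let rect = NSRect(x: (qr.size.width - side) / 2,
                      y: (qr.size.height - side) / 2,
                      width: side, height: side)
    logo.draw(in: rect)
    result.unlockFocus()
    return result
}
```

Scaling the tiny CIImage up with nearest-neighbor sampling before compositing keeps the modules crisp.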

Stream of ADD

I’ve been distracted again. Not by a project this time. Just by my own inner thoughts. I have this thing that makes it difficult to remain productive for long stretches of time. I worry that it will make me never finish a project of any significance. Mind you, any true work of art is never finished, only abandoned.

Right now, as I type this, I’m not even thinking about what I’m writing. And I’m not a good multitasker. My mind just reels with ideas, thoughts, images, and sounds. These are going on all the time except during rare moments when I feel calmed. This is not one of those moments.

A while back, I was inspired to do a nutrition app for iOS. Something that could be hooked up to Health Kit. The problem that really needs solving is getting a database of packaged foods so that I could scan in the code with the camera and look up that item to calculate such things as calories, sodium, fat, vitamins and minerals, etc. It’s a good app idea. But I haven’t gotten far with it. Getting the database together has been an issue.

Scanning codes led me to the QR Code. I had only been peripherally aware of their existence until I wrote two proof of concept apps. Both are on GitHub. One is called HelloUPC. It’s a simple proof of concept for scanning all sorts of bar codes, including QR Codes that are recognized by Core Image. The other is QRTest, another very simple app that lets you type in arbitrary text and produces a QR Code for it.

I’ve played with both apps a surprising amount. Neither is intended as a product. They were just learning exercises.

The idea of using QR Codes for business cards is not a new idea. It is in fact probably a great idea. You can store more than a simple URL in a QR Code. You can store a vCard in there. There is a limit to the data that can be stored. But partial information that allows you to scan a business card that uses a QR Code to encode a vCard is a nifty way to get the information from the business card into your contacts list.
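For reference, a minimal vCard payload of the kind that gets encoded into such a QR Code looks like this (the contact here is made up):

```
BEGIN:VCARD
VERSION:3.0
N:Doe;Jane;;;
FN:Jane Doe
ORG:Example Co
TEL;TYPE=WORK,VOICE:+1-555-0100
EMAIL:jane.doe@example.com
URL:https://example.com
END:VCARD
```

It is just text, so it can be fed straight to a QR Code generator like any other string.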

One small problem. I’ve lost all enthusiasm for that project. It’s gone. What happened instead is this crazy idea that it might be possible to store a QR Code as a PNG file in a QR Code for that very same PNG file. I don’t have the math to prove if this can be done or not. Nor do I know how much time it would take, if it can be done, to find a point of convergence where the data in the PNG file matches the data in the QR Code. I did find that Core Graphics does not provide a means of writing 1bpp PNG files, which would be necessary to get the file size small enough to fit in a QR Code, if it can be done at all. So now I’m looking at libpng to do the job.

While libpng is a nice library, I’m sure, it is not as simple as using Image I/O to write out a graphics file. Or maybe it is. There is a surprising amount of documentation to read to understand how to use libpng. Mind you, I did read a lot of documentation to get as far as I did with writing out QR Codes to a PNG file that is not 1bpp.

So why did I lose interest in the idea of using QR Codes for business cards? I don’t have any business cards. I don’t go out and exchange business cards. I can’t eat my own dog food, an expression that basically means you should be using your own software so that you are a user as well as the author.

I’ve had numerous other app ideas that have gone by the wayside. Usually either due to distractions or the realization that I’m aiming too high for a single developer. Programming is hard. It is amazing how little code you can end up with at the end of the day.

It goes something like this. You have an idea. So you start a test implementation. It’s crap. So you rewrite it. It gets a bit nicer. But you find other ways of doing it that are even better. So you rewrite again. Lines are added and then taken away. It’s like modeling clay except that the clay is spells in a text editor for doing the magic things that computers do. My programming style is very organic in this way. To keep things clean, it is necessary to constantly go back and organize things so that what you grow is a nice tree and not a thorny bush.

Working with these APIs and frameworks has also demonstrated something else to me. Apple has put together a rather nice package of tinker toys. A lot of hard things have been made easy. Mind you, a lot of easy things are still hard. Also, building a complete application from the ground up is not an easy thing to do. There are usually many people who have their hands in the cake, so to speak. You have UI people, UX people, algorithm people, art people, code people, etc. Often there is a lot of overlap.

When you are going it alone, you wear all the hats, including the marketing one.

There’s more. OS X and iOS are not entirely the same. iOS is not simply a subset of OS X. Certainly there is a lot of that. But there are also some fundamental differences. One is based on mouse and keyboard input. The other is based on touch input. Not just touch input either. iOS devices can know where they are and how they are moving. Those are also forms of input. iOS’s different hardware allows for an entirely different style of app than you get on the desktop.

Now here’s the rub. I sporadically come up with ideas that can work on one device type or the other. Now I’ve got one that should work on both. This is where the UI issue comes into play. OS X uses AppKit. iOS uses UIKit. How would one go about writing an app that crosses that boundary in such a way that you have a proper experience on an iPad and on an iMac?

I think that is an interesting challenge. How did Apple do it with Pages? I suspect one has to work at a lower level than both AppKit and UIKit where the common subset of functionality is. And it would be very nice for the file formats to be common to both iPad and iMac (including the laptops of course).
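I do not know how Apple actually structured Pages, but one common approach (a sketch of my own, not Apple’s) is to alias the platform types once and write the shared model and drawing code against the aliases:

```swift
// Conditional compilation picks the platform's class behind one name.
#if os(iOS)
import UIKit
typealias PlatformColor = UIColor
typealias PlatformBezierPath = UIBezierPath
#else
import AppKit
typealias PlatformColor = NSColor
typealias PlatformBezierPath = NSBezierPath
#endif

// Everything below here compiles unchanged for both OS X and iOS.
let strokeColor = PlatformColor.black
```

Dropping down to Core Graphics types like CGPath and CGColor is the other half of the trick, since Quartz itself is shared by both platforms. Only the view and controller layer then needs to be written twice.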

All These Worlds