NBSoundCloudActivity: A UIActivity for SoundCloud

As I’ve previously mentioned on this blog, I am currently working on a new iOS audio recorder app called SimpleMic. One of the primary goals of the app is to allow the user to get recorded audio off the device and onto one of many endpoints as easily as possible. With the arrival of iOS 6, Apple have introduced a very nice API for concisely encapsulating this type of integration and presenting it via an OS-standard user interface. The API I am referring to is UIActivity and its user interface companion, UIActivityViewController.

When I decided I wanted to include the ability for users of my app to upload recorded audio to SoundCloud, it was clear that UIActivity was the right way to enable this feature. With that, I give you NBSoundCloudActivity, a simple UIActivity subclass that wraps the SoundCloud sharing SDK and that is available under the MIT license for use with your own apps.

If you have the need to provide sharing to SoundCloud, this class makes it as easy as dropping in your SC credentials and displaying an instance of UIActivityViewController. Your users will get a standard iOS activity view that includes a “SoundCloud” button, complete with official icon.

Tapping the button will bring up the SoundCloud SDK UI for sharing where further info about the audio can be specified.

The full source is available on GitHub at the URL below. The README has a good amount of info on how to get started, and the code is well unit tested. If you have any questions or feedback, drop a comment here or contact me directly.

NBSoundCloudActivity Source Code

And if you’re interested in finding out more about the app that inspired this code, check out my SimpleMic preview page and sign up for email updates

That’s all for now, for more musings on software development and other such things, you can follow me on Twitter at @nickbona.

UPDATE: I’ve now also turned this project into a CocoaPod, so it’s even easier to add it to your own project. Enjoy!

Posted in Development

3 Reasons Why You Should Make Your Apps Accessible

Since the advent of the modern personal computer, assistive technology has been improving the lives of disabled people in very significant ways. With the arrival of VoiceOver on the Mac, and soon after on iOS, the potential for disabled users to interact with their devices in just as meaningful a way as everyone else is as real as ever. These technologies work great when it comes to default system activities and applications, but they require special consideration from developers in order to operate properly with third-party apps. Making this extra effort can be a hard sell to developers who already have enough to worry about just trying to get an app out the door; being a developer myself, I can completely understand the urge to not worry about such things. But despite what you may think, making your app accessible is something that will ultimately benefit you, the developer, as much as, and possibly more than, it stands to benefit your potential disabled users. Here are three reasons why it’s a good idea for you to spend the time to make your apps accessible:

1. Users Will Love You

It may be obvious, but users who rely on the accessibility features of iOS will love you if you take the time to make their lives easier. “But those users make up such a small percentage of all users, never mind the fact that my app already targets a subset of all users”, would be the most logical response to that statement, and it is a valid response. After all, you’re likely running some form of a business, and at the end of the day, you need to prioritize your efforts in order to maximize your potential profits in the face of limited time and resources. However, what you may not think about is the fact that these disabled users do not live in a vacuum; they have friends and family who aren’t disabled and could also be potential users of your product.

I was recently in search of a good news aggregator app for the iPad. None of the major apps (News360, SkyGrid, Flipboard) included good VoiceOver support, so I used them as best I could, but wasn’t very happy. Despite the fact that they are all fine apps for my non-VoiceOver-using friends and family, it never occurred to me to mention any of them, since none had delighted me. Fast forward a few months and, finally, Flipboard decided to add comprehensive VoiceOver support, thereby making it the only news app I use and one of the most used apps on my iPad. I instantly became excited enough to recommend the app to all of my friends and family, and just like that, Flipboard managed to gain a handful more users just by delighting this one VoiceOver user. So, delight your users and they will be your biggest champions. Adding accessibility support is an easy way to delight a very small but very vocal set of users.

2. Automated UI Testing

There’s no question that good automated testing is highly beneficial to any project, and with mobile development, this is particularly the case. On iOS there are many options for functional testing, but the two most notable are Apple’s own UIAutomation framework and the KIF (Keep It Functional) framework from the folks behind the Square mobile payments app. You may not realize it, but accessibility plays a critical role in the ability of both of these frameworks to operate. On a touch UI, it turns out that the best way to distill the various interface elements into something simple enough for a test framework to understand and navigate is to apply the same rules used by VoiceOver. The more descriptive you are in labeling your UI elements for VoiceOver, the easier it is to define how your tests interact with them, and vice versa.
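As a rough sketch of how the two tie together: the same label that VoiceOver speaks is what a KIF test uses to find the element. (The step method shown is from KIF’s original 1.x API, and `scenario` is an assumed KIF scenario object.)

```objc
// Somewhere in your view controller: label the control for VoiceOver.
UIButton *recordButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
recordButton.accessibilityLabel = @"Record"; // this is what VoiceOver speaks

// In a KIF scenario: the very same label locates the control for the test.
[scenario addStep:[KIFTestStep stepToTapViewWithAccessibilityLabel:@"Record"]];
```

One labeling effort serves both the VoiceOver user and the test suite.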

3. Simplify It

The rise of the mobile device as a computing platform has ushered in a new era where the general value and usefulness of an application is not determined by how much it can do, but by how well it can accomplish a finite task in an elegant and understandable way. Smaller screen sizes have forced developers to rethink everything about what makes a good user experience, and what worked on the desktop, in many cases, has no place on mobile. The general rule of thumb when it comes to designing a great mobile experience is, simply put, simplify. The most highly acclaimed apps are those where the developer made the difficult decision to remove features in favor of a non-cluttered interface and a straightforward user experience.

Mobile users demand simplicity, and that fact holds even more true for visually impaired users. For a VoiceOver user, an app is reduced to a much more linear experience (see this video on how VoiceOver works on iOS) whereby any visual fluff or sugar is unceremoniously dropped for an experience that favors content and navigation. Thinking about your app’s design in these terms will force you to focus on what is important and will undoubtedly deter you from getting caught up in the temptation of excessive visual candy. Don’t get me wrong, some of what is lost on visually impaired users (the general stylistic aspects) is still important, and worth perfecting, but never at the expense of the sort of simplicity that a VoiceOver user demands and that will be generally beneficial to all users.


Justifying the effort needed to make your app accessible can be a difficult proposition, but I hope that some of the points I’ve presented here will convince you that it’s worth it. In the end, creating an app that you can be proud of is really about creating an app that people love. And what better way to create an app that people love than by making sure it is a great user experience for as many users as possible? So next time you’re ready to submit your app, take a few more minutes to make it accessible. I promise it will make you feel like a better person, and you might just score some extra sales as well.



Posted in Development

New iPhone App: SimpleMic Audio Recorder

It’s been a long time coming, but I’m finally in the swing of things, working on a brand new iPhone app. SimpleMic is going to be my take on a very well known utility, the audio recorder. In getting back to doing solo development, I agonized for a long while over what type of app I wanted to create and time after time, I kept coming back to the notion that the best apps come from a passion to create something useful that people love. “Delight your users”, one of the most important directives to be considered when creating software or any type of product for that matter. Personally, I’ve found that you can’t delight your users unless you yourself are delighted by the app you are creating.

And so it is with that knowledge in hand that I landed on this app idea. In addition to being a software developer, I am a musician, and I often find myself wanting to capture quick audio recordings on my iPhone. Unfortunately, none of the existing audio recorder apps provide the type of streamlined and intuitive experience that I and many others have come to expect from stand-out iOS apps. I have something fresh to bring to this app concept, and though it has been done time and again by many apps, SimpleMic will be, simply put, better.

That’s all I’m ready to say at this point; I’m still in the late stages of design, with some prototyping. I am taking a significantly different approach with this app: trying to get the word out early, and trying to do a lot more up-front design work. As the project progresses, I hope to give some insights into what’s going well, and what’s going poorly, so that people can learn from some of my pains and triumphs.

If this app sounds like something that may be up your alley, head on over to the SimpleMic launch page and sign up for updates. 

Posted in Development, Technology

An Introduction to iOS Analytics

I have employed the use of an analytics service in many of the iOS apps I’ve been involved in, and as such, have learned to appreciate the overwhelming value of doing so. Recently, I came across yet another new way to use analytics that excited me enough to write a post outlining some of the things I’ve learned. This post is not an in-depth comparison of various services, but rather a high-level list of tips and suggestions to help you ease into using analytics and make the most of your chosen service.

Choose a service

The first step in your journey towards making use of analytics is to choose a service; there are many notable options to choose from.

These services are all fine choices; my personal preference is Flurry, due to the fact that all of its features, including the more advanced ones, are free.

Wrap it up

Once you’ve chosen an analytics service, you’ll be provided with instructions on how to integrate the SDK into your app. You’ll also be provided with code examples that demonstrate how to interact with the SDK to log events. Do not add these interactions directly to your application logic; rather, create a wrapper service that delegates any analytics-related calls to the underlying SDK of your choice. This wrapper service is also a good place to perform any other SDK-specific interactions such as initialization and defining uncaught exception handlers. The wrapper service should be the only class in your entire application that knows anything about the specific analytics service you are using (i.e. this should be the only place where you declare imports of the SDK headers). Protecting your application code from SDK-specific details will give you a great deal of flexibility, allowing you to easily adjust every aspect of how you interact with the analytics service, or even to swap it out for an entirely different service. These services add new features frequently, and you don’t want to be changing thousands of lines of boilerplate API calls should you decide to switch to a different provider.
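As a minimal sketch of such a wrapper, assuming Flurry (the class and method names on the SDK side vary by version, so treat these as illustrative):

```objc
// AnalyticsService.h — the rest of the app only ever sees this interface.
@interface AnalyticsService : NSObject
+ (void)start;
+ (void)logEvent:(NSString *)name;
@end

// AnalyticsService.m — the ONLY file in the app that imports the SDK headers.
#import "Flurry.h"

@implementation AnalyticsService
+ (void)start {
    // SDK-specific setup lives here: session start, uncaught exception handlers, etc.
    [Flurry startSession:@"YOUR_API_KEY"];
}
+ (void)logEvent:(NSString *)name {
    [Flurry logEvent:name];
}
@end
```

Switching providers later means rewriting this one file, not hunting down calls scattered across the codebase.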

Track your screens

One common use of analytics is to determine how a user is interacting with your app at a per-screen level. What path do they take through your app in each session? How long do they spend on each screen? If your app consists of many views, this can result in lots of repetitive code. Mitigate this by creating a base subclass of UIViewController. In this class, you can start a timed analytics event in “viewDidAppear:” and end that event in “viewDidDisappear:”, passing the class name as the event identifier. Now simply determine which of your app’s view controllers you want to track, and configure them to extend this new base class. You can use a similar strategy for tracking things like button taps, but do remember that most analytics services limit the number of unique events you can track.
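A sketch of that base class, assuming a wrapper service with hypothetical `logTimedEvent:`/`endTimedEvent:` methods that delegate to your SDK’s timed-event calls:

```objc
// TrackedViewController.m — view controllers you want tracked extend this.
@implementation TrackedViewController

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Start a timed event named after the concrete subclass,
    // e.g. "SettingsViewController".
    [AnalyticsService logTimedEvent:NSStringFromClass([self class])];
}

- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    // End the event; the service records how long the screen was visible.
    [AnalyticsService endTimedEvent:NSStringFromClass([self class])];
}

@end
```

Because the event name comes from NSStringFromClass, every subclass gets per-screen tracking with zero additional code.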

Capture errors

Every app, no matter how polished, will encounter problems once it’s out in the wild. Code that needs to respond to error conditions via NSError or any other error mechanism can present those errors to the user, but the vast majority of users will not know what to do in such a situation and are very unlikely to notify you of problems. Worse still, if they do decide to report a problem, they have likely already dismissed the error message and won’t remember details about it. Don’t rely on your users to report errors; rely on your analytics service. Recover from errors as gracefully as possible, show the user a generic alert if you see fit, and log as much detail to your analytics as possible. This way, if a user does report a problem, you’ll instantly have access to as much detail as you need to fix it. Moreover, you can keep a close eye on the error events that are coming in so that you can proactively solve problems that users may not even report.
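For example, a failed file read might be handled like this (the `logEvent:withParameters:` wrapper method and the `path` variable are hypothetical; the point is to capture the NSError’s details at the moment of failure):

```objc
NSError *error = nil;
NSString *contents = [NSString stringWithContentsOfFile:path
                                               encoding:NSUTF8StringEncoding
                                                  error:&error];
if (!contents) {
    // Log every useful detail before showing the user a generic alert.
    [AnalyticsService logEvent:@"file_read_failed"
                withParameters:@{@"domain": error.domain,
                                 @"code":   @(error.code),
                                 @"reason": error.localizedDescription}];
    // ...then recover as gracefully as possible.
}
```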

Debug and tune

I recently encountered an issue in an app I work on regarding accurate acquisition of a user’s current location via CLLocationManager. The problem was only occurring for certain users in certain places, particularly where the GPS signal was spotty. After extensive reading around the various switches and levers on CLLocationManager, I was confident that I had to start tweaking various parameters. I added a copious amount of event tracking to my location acquisition code, built the app for my device, and spent a few days testing the app in a variety of locations and situations. After analyzing the data, I was able to quickly adjust the parameters to sensible values, and the result was better-tuned location code than any I could find online to date. The takeaway here is: if you yourself, or your users, are running into inconsistent or mysterious behavior that only seems to surface at random times, be liberal with event logging around suspected trouble areas in your code, and you’ll be able to use the resulting data to very effectively isolate and solve these problems. Likewise, if there is a piece of code in your app that you’d like to tune based on real-world data, analytics are the perfect way to gather it.


Analytics are one of the best ways to acquire data about how users are interacting with your app. I have had a very positive experience using them in my own apps and hope that these tips and suggestions will help you adopt the use of them in your own. If you have any additional tips, tricks, or suggestions around mobile analytics, drop a comment below. For more content like this, you can subscribe to this blog or follow me (@nickbona) on Twitter.

Posted in Development


The iOS platform relies heavily on the use of gestures to enable both basic and complex activities across the core OS and within applications. One such gesture is the “swipe” gesture, which in most cases is used to advance content within a view, either horizontally or vertically. It is a very natural way to interact with what’s on screen and is particularly suited to scrolling, being extensively used as such in iOS via UITableView and its superclass, UIScrollView.

UITableView and Vertical Scrolling

UITableView is perhaps one of the most common iOS UI elements. In its basic form, it displays a list of cells that takes up the majority of the screen and allows the user to scroll vertically by swiping up or down. It was used in many, if not all, of Apple’s own apps from day one, and its presence is almost instantly recognizable. Additionally, while scrolling, UITableView (via UIScrollView) provides a scroll bar at the right of the screen, and it is possible to programmatically flash this bar in order to indicate the ability to scroll. And so the swipe gesture, in this case, does not have a discoverability problem. A user sees a list of elements on screen and knows that, more often than not, they can swipe to interact.

The Problem: Horizontal Scrolling

What if, however, you are presenting data using a UIScrollView in a horizontal orientation? If the UIScrollView has paging enabled, i.e. the scrolling is paged rather than fluid, the Apple convention is to use a paging indicator (the little dots that can be seen on the iOS home screen). That’s fine if there is a relatively small number of pages, but if there are too many, you run the risk of your page indicator running right off the side of the screen. If your scrolling is fluid, you have a few other options. You can rely on the scroll bar, programmatically flashing it when your view appears, but the user won’t be looking for this and might miss it. Alternatively, you can modify the size and number of elements so that part of the first or last element hangs off screen (this technique can be seen in Apple’s Trailers app). When trying to address this problem in Serentripiti, we were not satisfied with any of these approaches and opted for something more obvious.
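For reference, the “flash the scroll bar” option amounts to a single call once the view is on screen (a sketch, assuming a `scrollView` property on the view controller):

```objc
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Briefly shows the scroll indicators, then fades them out;
    // as noted, a user who isn't watching for this can easily miss it.
    [self.scrollView flashScrollIndicators];
}
```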

Another Solution: NBSwipeIndicator

That something is a very simple UI element I’ve created called NBSwipeIndicator. In a nutshell, it displays a set of arrows below or above horizontally scrollable content to make it clear to the user that they can swipe to interact with said content. It supports the ability to dynamically show left, right, or both arrows, depending on which portion of the content is currently visible. This dynamic behavior can be driven directly or based on paging, so it should be flexible enough to support several scenarios. Finally, the arrows are drawn with Core Graphics routines, so it is very fast while still allowing size and color customization. If you want to use custom art for the arrows, they are just UIView subclasses within the NBSwipeIndicator subview, so they can be easily swapped out with UIImageView instances.

To see NBSwipeIndicator in action, check out the latest version of Serentripiti (it’s free), specifically in the Trippy Creator.

The source code is available under a liberal BSD license for use in your own projects and can be found on GitHub in my NBUIElements repository. If you use it, let me know what you think in the comments or contact me directly.

Posted in Development

Why Are You Changing My Twitters?

I am undoubtedly late to the party in commenting on the significant Twitter overhaul that rolled out late last week, but I’d like to comment nonetheless. Whether the changes in both user interface and experience are ultimately for the better or worse has been a topic debated ad nauseam, but I feel as though many such debates are simply missing the point. Twitter didn’t completely rework their design because they thought the old version was unusable; quite the opposite, I think the old version was very usable, and I believe they thought nothing but the same.

As of 2010, Twitter revealed that 75% of their traffic came through the API. There is a thriving ecosystem of apps and websites that make use of this API, from simple apps that let you read your tweets and post new ones, to apps that make creative use of your social stream to enrich the news (Flipboard). The platform is clearly thriving, arguably one of the most, and there is no evidence of growth slowing down in that regard.

And that is the exact problem that they are trying to solve with this redesign/rethink, yes, problem. From a purely growth/user-base standpoint, Twitter made the right move by providing and continuing to improve on a very thorough and complete API. But now they have found themselves in a situation where they are basically a stream of information that powers thousands of applications that result in zero traffic to any of Twitter’s official properties. They have millions of users interacting with the service via apps and websites in a way that is dangerously detached from Twitter as a destination.

Dangerous because they need to make money, eventually, and I don’t think they have a good solution that involves API. They could charge developers for usage, but that cost will basically fall on to the end-user and result in people flocking to alternatives. Moreover, they still don’t have a reliable enough system to offer any sort of service level agreement to potential paying customers. They could try and find some clever way to inject advertising into API responses, but again, forcing this down the throats of developers would likely drive them and their users away in droves.

So where does that leave Twitter as a service, a product, and most importantly, a company? It leaves them in a situation where they need to find ways to make their own consumption of Twitter the service/platform more appealing to end users. Barring some revolutionary new way of monetizing a web property, they can’t survive without sucking people into their own website and apps. The new Twitter is a direct attempt at doing just that, and I’d be very shocked if it were the last such attempt.

As far as what I think of the redesign, I don’t think it really matters what I or anyone else thinks of it. In the long run, I believe that Twitter are making the right move by taking what others have done with the service, and using that knowledge to shape their vision for the future. They have taken the approach of starting out open, allowing the platform to grow organically, and then applying what they learn from how others make use of it back into the product. It is yet to be seen if this approach works, but I commend them for trying to innovate in the face of a very vocal user-base.

Posted in Technology

“Speak Selected Text” in OS X Lion

The arrival of Mac OS X Lion (10.7) several months ago marked yet another milestone for Apple’s desktop operating system. It ushered in several significant improvements such as iCloud, Versions, revamped apps, and, of particular interest to myself and the rest of the visually impaired community, some accessibility enhancements. After some much needed hardware upgrades to my aging Mac Pro, I installed the new OS only to come to a disturbing realization: a feature that I rely upon perhaps more than any other was not working the way I had come to expect.

Why, oh why, had this feature that I had come to depend on so heavily, now suddenly become 50% useless? I scoured the internet for answers and finally came across the explanation in a somewhat unlikely place. This Firefox bug and subsequent comments describe, in very drawn out terms, a distinct change in the way “speak selected text” worked in 10.6 and earlier, and how it now works in 10.7. It turns out that in 10.6, there was a “Speech Service” agent, always running, and waiting for the configured hotkey to be activated. Upon activation, it would, **hand wave**, trigger a copy of currently selected text, and then feed the contents of the clipboard into the speech synthesis engine. In 10.7, it seems as though Apple have “upgraded” this process to instead capture the target text via well-known accessibility APIs. This method is arguably less “hacky” and more “correct”, but is much less likely to work across every application. While copy+paste works in pretty much every app, an unfortunately low percentage of apps are thorough about enabling accessibility. And thus, the “old” method for enabling this feature seems much preferable.

Naturally, my next thought was, “Hey! I’m a competent developer and stand-up human being, so why don’t I try to recreate the 10.6 method as a headless Cocoa app?”, and so I did. One coffee-enhanced Friday night later and “SpeakCopy” was born. I won’t go into code-level details on what this app does, but a technical overview follows:

  • Registers a hotkey listener for “option+control+f” (hardcoded for now).
  • On hotkey press, uses keyboard events to send a “command+c” to trigger a copy event.
  • Fetches the text from NSPasteBoard and feeds it directly into NSSpeechSynthesizer.
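The last two steps above boil down to a handful of AppKit calls. A sketch (in the real app, the synthesizer needs a strong reference held somewhere so it isn’t deallocated mid-sentence):

```objc
// After the synthetic Cmd+C has been delivered, read the clipboard and speak it.
NSPasteboard *pasteboard = [NSPasteboard generalPasteboard];
NSString *text = [pasteboard stringForType:NSPasteboardTypeString];
if (text.length > 0) {
    // A nil voice means the default voice configured in the Speech pane.
    NSSpeechSynthesizer *synth = [[NSSpeechSynthesizer alloc] initWithVoice:nil];
    [synth startSpeakingString:text];
}
```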

The app uses the speech defaults you configure in the “Speech” pane of System Preferences and has very similar behavior to the 10.6 feature.

If you’d like to use SpeakCopy, you can download the app here. Once you double-click to run it, it will start listening for the hotkey. You can also add it as a Login Item by going to the Users & Groups pane in System Preferences; this will make it start automatically every time you reboot your Mac, which is very convenient.

The full source code is also available in this GitHub repository. Feel free to check it out and improve it. Hopefully Apple will address the shortcomings that this project remedies, but if they don’t, I will continue to maintain it.

Posted in Development

Hello World

Hello, my name is Nick and this is my new blog. I hope this is the start of a long and meaningful journey that sees me doing something I’ve longed to do more of for quite some time: writing. For now, I’ll be focusing mostly on software development and technology, with a bit of fatherhood and life sprinkled in for good measure. Eventually, I hope to rekindle my love affair with music and use this site to share some of my own creations with the world. I won’t bore you with any more details; if you are interested in learning more about who I am, you can head over to the about page. Otherwise, you can look for my first “real” post within the coming days.


Posted in Uncategorized