What is Link Retargeting?

This post was originally published by Serge Salager. Serge is the CEO and Founder of RetargetLinks. The post can be read here

To put it simply, link retargeting is just like traditional ad retargeting. The key difference is that instead of having to send customers to your site, you can display retargeted ads based on the link they click. And it can be any link – not just to your website.

Link retargeting really allows you to take your content, social, email, or even AdWords marketing farther! We’ve put together five key tips you need to know to get started.

Can I shorten a link to any content?

The short answer (pun intended!) is yes! You can shorten any link on any platform to any site. To make the most of your efforts, we recommend making sure the content is relevant to your brand. This way, you’ll improve the odds that your target customer will click.

As an example, Pampers is using link retargeting to target ‘first-time moms’. They chose to direct their audience to a relevant article in Parents Magazine: “How to prepare for your first baby?”

Step 1: The advertiser posts “retarget” short links through social media, email, press or influencer platforms.

Step 2: The service will retarget only those who click on the link. In this case, it will show 150,000 banner ads to 10,000 people.

Can I use link retargeting on a standard “long” link?

Link retargeting is not possible with a standard link. This is because it requires specialized technology that allows the link to place a retargeting cookie on the computer of the person who clicks.

We’ve developed this software to make it really easy for you to turn your standard links into retargeting short links. All it takes is the click of a button in your RetargetLinks dashboard.
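Under the hood, a retargeting short link is essentially a tiny redirect service: it resolves the short code, tags the visitor with a retargeting cookie (or fires a pixel), and then forwards them to the destination. Here is a minimal, hypothetical sketch in Python (Flask) of how such a service might work; the slug, cookie name, and destination table are all illustrative, not RetargetLinks’ actual implementation:

```python
# Hypothetical sketch of a retargeting short link service.
from uuid import uuid4
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

# Illustrative slug -> destination mapping; a real service would use a database.
LINKS = {"patent": "https://example.com/our-patent"}

@app.route("/<slug>")
def follow_link(slug):
    destination = LINKS.get(slug)
    if destination is None:
        return "Unknown link", 404
    response = make_response(redirect(destination, code=302))
    # Tag the visitor so ad platforms can later match them to an audience.
    visitor_id = request.cookies.get("rt_visitor") or str(uuid4())
    response.set_cookie("rt_visitor", visitor_id, max_age=60 * 60 * 24 * 30)
    # A real implementation would also log the click and sync the visitor ID
    # with the ad exchanges here.
    return response
```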

Can I customize my short links?

Absolutely. Our short links are quite flexible, to allow you to have them appear exactly how you’d like.

You can customize the default re.tc links (this is a link to our patent for example: re.tc/patent). You can also request a short vanity URL (su.tt or jmpr.rocks are examples from some of our clients).

Note: In the vanity URL example, you’ll need to buy the short domain name first and then follow the instructions provided in your dashboard to start link retargeting using your own short links.

When running AdWords campaigns, you’re actually able to hide the short link within your AdWords ad link (see more here on how to set up a search retargeting campaign).

How many ads will be shown and where?

Our default volume cap (the maximum number of ads we show per person) is 15. Delivery is paced at up to 9 ads per week, 5 ads per day, and 2 ads per hour, depending on the audience. We do this to keep your brand top of mind over a two to three week period following the launch of your campaign.
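For the technically curious, this kind of layered cap is easy to picture in code. Below is a toy sketch in Python; the cap values come straight from the paragraph above, but the function itself is purely illustrative and not the production logic behind RetargetLinks:

```python
# Toy frequency-cap check: may we show this person another ad?
from datetime import datetime, timedelta

CAPS = {"total": 15, "week": 9, "day": 5, "hour": 2}  # values quoted above

def may_serve(impression_times, now=None):
    """impression_times: datetimes of ads already shown to this person."""
    now = now or datetime.utcnow()
    windows = {
        "total": impression_times,
        "week": [t for t in impression_times if now - t <= timedelta(weeks=1)],
        "day": [t for t in impression_times if now - t <= timedelta(days=1)],
        "hour": [t for t in impression_times if now - t <= timedelta(hours=1)],
    }
    # Serve only if every window is still under its cap.
    return all(len(seen) < CAPS[cap] for cap, seen in windows.items())
```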

We display banner ads just like a traditional retargeting tool. Your ads will display in Google AdX, OpenX, Rubicon, AppNexus and other real-time bidding platforms across premium online publications like Vogue, Elle, Fortune, FastCompany, Wall Street Journal and all other ad-supported sites.

How do I know if my link retargeting campaign is working?

There are three key metrics we use to determine whether a link retargeting campaign is working. They are: link clicks, ad clicks, and conversions. We’ve included some steps here to show you how to measure these metrics.

Step One – Measure Your Link Clicks

Make sure your link retargeting campaign is reaching your target audience. Emails, online articles, social media posts, newsletters, press releases, and even Google AdWords are all ways for you to share your short links.

If you’re just starting or are looking to reach out to more targets, we recommend using RetargetLinks as a prospecting tool. You can do this by boosting posts on social media channels, or paying for ads in Google AdWords.

Then, you can tell if your campaign is working by looking at the number of link clicks on your Links Dashboard.

If you’re sharing the right content with the right audience on the right channels, you’ll have a lot of clicks. The example you’ll see next is from a campaign run by the team at Traction Conference. As a result of their RetargetLinks content campaign, they had 85,138 clicks (58,296 unique) from 873 links shared via their email newsletter (direct) and their Twitter and Facebook pages.

Step Two – Measure Your Ad Clicks

The second indication to help you measure your campaign is to look at the number of ad clicks on your Ads Dashboard.

When you display relevant and compelling banner ads, you’ll catch the attention of your targets and encourage them to click to find out more.

Helpful tip: banner ads are most effective when they have consistent branding, simple messaging, a clear call-to-action (CTA), and even some element of animation. 

In the above example, Traction Conference managed to display 161,340 retargeting ads to most of the 58,296 people who clicked on their short links. Out of those, 422 people clicked an ad, for a 0.26% click-through rate. Note that this is roughly three times the 0.10% average for banner ad performance!

Step Three – Measure Your Conversions

The final indication of performance is to look at the number of people that land on your page and ultimately the number of those that convert by purchasing your product or subscribing to your service.

In the case of Traction Conference, 947 people landed on the marketing page and 186 actually went on to purchase a ticket for the conference. The team was able to achieve a 20% conversion rate. Note that this is 10 times greater than a typical retargeting ad conversion rate.
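To recap the arithmetic behind these three metrics, here is the Traction Conference funnel in a few lines of Python (the figures are the ones quoted above; the percentages are simple ratios):

```python
# Funnel metrics from the Traction Conference example in this post.
ads_shown = 161_340
ad_clicks = 422
landed    = 947
purchases = 186

ctr        = ad_clicks / ads_shown  # ~0.26% click-through rate
conversion = purchases / landed     # ~19.6%, i.e. roughly 20%

print(f"CTR: {ctr:.2%}, conversion: {conversion:.2%}")
```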

Summary

Hopefully if you’ve made it this far down the post, you have a better idea of how link retargeting works. Now you are ready to make the most out of your campaigns.

If you have any questions, don’t hesitate to drop us a line as we’d love to hear from you! If you’re ready to get started, click here to create your first shortened retarget link!

How to Design Delightful Experiences for the Internet of Things

This post was originally published on Toptal.com BY SERGIO ORTIZ – DESIGNER @ TOPTAL

One of the next technological revolutions on the horizon is the emerging platform of the Internet of Things (IoT). The core of its promise is a world where household appliances, cars, trucks, trains, clothes, medical devices, and much more would be connected to the internet via smart sensors capable of sensing and sharing information.

As its presence in our lives grows, the Internet of Things (IoT) will be fundamental to most things we see, touch, and experience—UX design will play an important, if not essential, role in that advancement.

From healthcare to transportation—from retail to industrial applications, companies are constantly searching for new ideas and solutions in order to create new experiences, deliver greater value to customers, and make people’s lives easier and more efficient.

If you think you don’t know what IoT is, you’ve probably already experienced it and just didn’t realize what it was. Home automation hubs like Google’s Home and Amazon’s Alexa, the Nest Learning Thermostat, Philips Hue lightbulbs, Samsung SmartThings, Amazon Go, and fridges that monitor their contents all fall into the IoT category.

Flo, a smart residential water system that monitors water efficiency, leaks, and waste

The next wave of IoT will connect millions of devices across the globe and make homes, cities, transportation, and factories smarter and more efficient. Real-time data produced by hundreds of IoT sensors will change the way businesses operate and how we see the world.

The skills needed in this new paradigm will shift from component thinking to whole systems thinking; from one screen to multiple touch-points. Most IoT systems will be connected to an app, but this will eventually evolve into a multi-interface world, some of it yet to be invented.

Designers must adapt to new technologies and paradigms or risk becoming irrelevant. Experiences that we design for are shifting dramatically—think AI, VR, AR, MR, IoT, and any combination thereof.

Utilizing live streaming data collected from millions of sensors, designers will be tasked with crafting experiences turning that data into something useful via an interface (a mobile app, smart TV, smart mirror, smartwatch, or car dashboard).

There will be huge opportunities for designers in the industrial Internet of Things. Organizations of all types and industries are investing heavily in this space, making IoT growth projections astronomical—to the tune of 50 billion connected devices by 2020.

Graphic by Clarice Technologies

IoT Is Already Here

An example of an IoT ecosystem available today is an internet connected doorbell that has a video camera, speaker, microphone, and motion sensor. When a visitor either rings the doorbell or comes near the front door, the owner receives a notification on their mobile via the app. The owner is able to communicate with the visitor via the speaker and microphone; they can let the visitor in via a remote controlled door lock or instruct a delivery person to leave the package somewhere safe.

SkyBell is a smart video doorbell that allows you to see, hear, and speak to the visitor at your door whether you’re at home, at work, or on the go.

Another example is Nanit—a connected baby monitor with computer vision. It has real-time HD video with night vision, plus temperature and humidity sensors. Its app gives you access to recorded and live HD video streams and smart notifications.

The IoT baby monitor Nanit

Implications for UX Design

These new experiences will require new modes of interaction—modalities yet to be designed. Touch will evolve and expand. Gestures and physical body motion will become a more natural way of interacting with the digital world around us.

The IoT space is ready for exploration, and designers need to investigate potential human interaction models, learn how to design for them, and find ways to unlock value. The focus will no longer be on singular experiences, but instead on those that represent a broader ecosystem.

The Myo armband

Designers will become involved during every stage of the design process as it will become more about designing the entire product experience.

They will need to share creative authority during the whole development cycle and effectively influence the outcome of the end product, working in collaboration with an industrial designer—for example, on what that IoT doorbell looks like, how it works, the video and sound between the two parties, and the unlocking and locking of the door.

Five Critical Aspects for Designers to Consider in the IoT Era

1) Prepare for Evolving User Interactions

Google Home connects seamlessly with smart IoT devices so you can use voice to set the perfect temperature or turn down the lights.

Just as touchscreens introduced the pinch, finger scroll, and swipe, we’ll soon be introducing other ways of interacting with IoT systems. We can expect that hand gestures will continue to be used, but we’ll begin to see even more natural movements, such as tiny finger movements, as options for controlling devices in our environment.

Google is already preparing for a future where hand and finger movements will control things in our environment. Its Project Soli is an interaction sensor that uses radar for motion tracking of the human hand.

Radar-sensed hand and finger tracking (Google’s Project Soli)

IoT will no doubt be integrated with VR. With VR, our movements mimic those of the real world. Moving our heads up, down and around allows us to explore the VR world in a natural way. We’ll be able to control our environment through commonly used arm, hand, and finger movements.

Merging the VR experience with IoT opens up many new possibilities. Imagine an Amazon Go VR version—a self-serve grocery store in a VR world where a customer “walks in” and collects items into their virtual shopping cart by picking up their choices from the store shelves with natural hand movements.

For designers, feedback and confirmation are important considerations in this new paradigm as are many of the 10 Usability Heuristics for User Interface Design. Many of these “rules of thumb” will live on:

  • Visibility of system status
  • Match between the system and the real world
  • User control and freedom
  • Consistency and standards
  • Flexibility and efficiency of use
  • Help users recognize, diagnose, and recover from errors

Voice will play a huge role. Even the act of walking will dictate some level of control. As these new controls get more refined and are adopted by users, they will become the standard by which we interact in this space, whether a screen is present or not.

Using Amazon Alexa is as simple as asking a question. Just ask to play music, read the news, control your smart home, call a car.

What about other tactile, sensory or emotive inputs? How will emotions and physiology apply to this space? Designers must get ahead of this new paradigm or risk being left behind.

2) Rethink and Adapt to Interactions of the Future

It’s safe to say that, for example, things like the ‘menu’ in a user interface will in some shape or form always be a part of the experience. And just as we saw the introduction of the hamburger menu once mobile became ubiquitous, we’ll need to explore its evolution (or something similar) more extensively within IoT environments.

You need look no further than wearables like Samsung’s Gear S3 Watch to see how menu controls might evolve.

As we create the UIs of the future and new modes of interaction, we’ll need to make sure we keep in mind the users’ expectations. Designers will still need to follow usability and interaction standards, conventions, and best practices. By evolving from what is already known, the potential of new technologies can be harnessed—innovative UIs can be designed while still maintaining enough familiarity for them to be usable.

In the not-too-distant future, our daily lives will be imbued with micro-interactions as we move from device to device and UI to UI. There will not be just one, but many interfaces to interact with in a multitude of ways as people move through their day. An interaction may begin at home on a smart mirror, continue on a smartwatch down the street and on a mobile in a taxi, and then finish on a desktop at work. Continuity and consistency will play an important part.

As IoT continues to grow and evolve, we’ll encounter never-before-seen devices, new methods of interaction, and many varieties of associated UI. Those of us who design in these new environments will need to strike the right balance between the familiar and the new.

3) Design Contextual Experiences

IoT will achieve mass adoption by consumers and businesses when products are easily understood, affordable, and seamlessly integrated into their lives. This means we need to expand beyond personalization, and begin to infuse context into the experience.

Designing for context has the potential to permeate experiences, making them more meaningful and valuable.

As we design contextual, holistic experiences that will harness the power of IoT, we need to understand that being inconspicuous, far from being a bad thing, may be the goal. When the IoT product knows you, knows where you are, and knows what you need, it will only make itself present as needed. Things will adapt to people, and before we know it, become fully integrated into their daily lives.

As we design UIs for this new paradigm, we’ll need to understand that the human-computer interaction will be dynamic and contextual—and it will change constantly. At times we’ll need to allow for controls, while at others the systems will simply relay data with notifications that are useful in that moment. Each view will be intelligently displayed in the context of that very moment via the most appropriate channel and device. This contextual design would be micro-interaction driven, timely, and purposeful.

4) Design Anticipatory Experiences

One of the most promising characteristics of IoT is the ability to predict and adapt to situations. The old model of singular actions driving singular reactions is evolving at a rapid pace.

It’s going to be more about the output without much need for input.

“Magical experiences” will be born out of awesome combinations of AI, machine learning, computer vision, sensor fusion, augmented reality, virtual reality, IoT, and anticipatory design. Rumor has it, Apple is investing heavily in AR.

We will be surrounded by a growing number of intelligent IoT systems that will automatically do things for us in a predictive manner. For example, after we use it a few times, the Nest learns our habits and adjusts intelligently without us needing to get involved.

We’ll begin to see systems that will become increasingly predictive. A simple gesture, movement, or word will initiate a series of useful events. There will be a chain of events that aren’t initiated by people at all, because the system will learn and optimize its actions based on a treasure trove of data. These events could be initiated by a person’s proximity, the time of day, environmental conditions (such as light, humidity, temperature, etc.), and previous behavioral data.

More than ever, deep user research will play an important role in designing experiences that are anticipatory and contextual. Defining personas, observing user behaviors, and empathy mapping—just to name a few UX techniques—will become crucial in crafting sophisticated user experiences that will feel almost “magical” to people.

5) Most Importantly, Make It Useful!

We’re seeing tremendous advancements in the field of IoT, and the role that design will play in it is about empowering people in ways that were not possible before. The demand for deeply satisfying, quality experiences will increase, along with high expectations and standards.

While all of the above is important, we must never lose sight of the fact that it’s about making people’s lives easier. Designing “moments of delightful surprise” in this new paradigm—along with deep empathy for the user—is a skill designers will need to develop. As we look towards an even more connected digital future, connecting us to “intelligent things” in meaningful ways will allow for more efficient interaction, more productivity and, hopefully, happier lives.

Designers will need to design IoT-driven experiences that are contextual, helpful, and meaningful—optimized for people, not technologies.

“Experiences” will trump “things.”

The next step is for designers to become involved, and design the most seamless user experiences for the Internet of Things. Technologies must evolve into “optimizers of our lives.”

In other words, become useful for people.

The Industry Could Do Without Pixel Density And PPI Marketing

This post was originally published on Toptal.com BY NERMIN HAJDARBEGOVIC – TECHNICAL EDITOR @TOPTAL

A long, long time ago, I used to make a bit of money on the side designing and printing business cards, along with ad materials and various documents. I was young and I needed the cash, and so did my buddy. Some of it went towards new hardware, while much of it was burned on 3-day barbecue binges, fuelled by cheap beer and brandy.

It didn’t take us long to realize the HP and Epson spec sheets, which proudly cited insanely high DPI for their printers and scanners, were as pointless as a Facebook share button on a kinky fetish site. So, we started using cheaper, older hardware other people didn’t want, and put the savings to good use: more meat and more booze. Fast forward twenty years, we still like to work hard and afford the finer things in life, and some of them were in part made possible by tech frugality. We didn’t buy into printer DPI poppycock back then, and we certainly couldn’t care less about display PPI today.

But your average consumer does. Oh yes, most of them still think they can actually see the difference between 440 and 550 pixels per inch (PPI) on their latest gadget. I might have missed a few things over the past two decades, but either the human eye has evolved to such a degree that all kids and many millennials have better vision than ace fighter pilots, or they’re just delusional.

I’ll go with delusional, or at least immature, because I figured out my eyes weren’t that good when I was 15.

In this post I will try to explain what led the industry astray, and what developers and designers need to keep in mind when developing for this new breed of device. You may notice that I have some strong opinions on the subject, but this is not supposed to be a bland, unbiased report on a purely technical issue. The problem was not created by engineers; you’ll have to get in touch with marketing to find the responsible parties.

How Did The PPI Lunacy Get Started Anyway?

One word: Apple.

Apple was the catalyst, but it actually turned out to be the good guy in the long run. The real culprit was the Android mob.

Apple introduced the Retina marketing gimmick with the launch of the iPhone 4, which boasted a small, hi-res display that blew the competition out of the water. In fact, it still looks quite good, and there is a good reason for that: Our eyes couldn’t tell the difference in 2010, and guess what, they can’t tell the difference in 2015.

Most people associate Retina displays with the density of the iPhone 4 display, which was 326 PPI (614,400 pixels on a 3.5-inch display). This is not inaccurate; saying that anything above 300 PPI can be considered a Retina display is more or less correct when talking about a mobile phone. The same metric cannot be applied to other devices because the typical viewing distance is different. Apple’s standard for mobile phones (at the time) was 10 to 12 inches, or 25 to 30 centimetres. The typical viewing distance for tablets is often cited at 15 inches or 38 centimetres, while desktop and notebook screens are viewed from about 20 inches (51 centimetres).

You can probably spot an issue here. Did you use your iPhone 4 at the typical 10-inch viewing distance? Maybe. But what about the iPhone 6 Plus, with two extra inches? Probably not. One of the good things about having an oversized phone is that you don’t need to bring it up to your face to view a notification or a message. Sometimes I don’t even pick my phone up; I just tap it next to my keyboard. Sometimes I pick it up and shoot off a short text without taking my wrist off the table, at a desktop keyboard distance, which is much closer to what Apple had in mind for notebook and desktop screens than mobiles or even tablets.

Being the youngest, insecure kids on the block, the Android gang quickly decided they had to do something about the iPhone 4. The response was swift and came in the form of 720p smartphones, with panels measuring 4.5 to 4.8 inches. When frustrated teens try to outdo someone else, they tend to overdo it, so a generation or two later, 1080p panels became mainstream, and they got bigger, 4.8 to 5.2 inches. The latest Samsung flagship, the Galaxy S6, boasts a 5.1-inch Quad HD Super AMOLED display, with a resolution of 2560 x 1440 and, wait for it, 577 PPI. There is just one thing: The panel uses a PenTile matrix, so many people would argue that it’s not really 2560×1440. Who cares, bigger numbers sell, right?

In Samsung’s defense, the Korean giant did create a use-case for such high resolution screens, sort of. It’s a simple and relatively inexpensive Virtual Reality solution dubbed Gear VR. Google Cardboard is more of the same, but Samsung seems to be taking the whole VR trend a bit more seriously.

The Invasion Of Oversized Androids

There was a bit of a problem with this approach. Like it or not, once you start chasing pixels, you are more likely to end up with a bigger screen. This means more backlighting, more GPU load, more juice, and a bigger battery. And, how many pixels does that leave us with anyway?

Well, for a 720p display, the phone has to render 921,600 pixels for every frame. That goes up to 2,073,600 for 1080p displays and culminates in 3,686,400 on a 1440p panel like the one used in the Galaxy S6. To go from 720p to 1440p, an application processor has to figure out what to do with four times as many pixels with each refresh cycle. This, obviously, isn’t very good for battery life, although Samsung did a great job, thanks to its efficient AMOLED technology and industry-leading 14nm SoC. However, the general trend was a vicious circle and many vendors simply keep making bigger screens, to hide the even bigger battery at the back.

Apple may have started the craze, but the real troublemakers are found elsewhere, hiding behind green droids.

I know what some of you are thinking: “But consumers want bigger phones!”

No. Consumers want whatever you tell them. That’s a fact.

In this case, the industry is telling them they want bigger phones, which are becoming so unwieldy that the next thing they’ll need are smartwatches, so they don’t have to drag their huge phablets from their pockets and purses. Convenient, is it not? Huge phones with insanely high pixel densities are a triumph of cunning marketing over sensible engineering.

Besides, people also want better battery life and we’re not seeing much progress in that department. The industry is tackling this issue with bigger batteries, some of which are almost powerful enough to jumpstart a car. I wonder what will happen when one of them, built by the lowest bidder, decides it’s had enough, springs a leak, or accidentally gets punctured?

That’s one of the reasons why I always found those tabloid headlines about smartphones stopping bullets so hilarious. Sure, it can happen under the right circumstances, but theoretically, you can also win the lottery and get struck by lightning when you go out to celebrate.

Instead Of Pointless PPI, Try Using PPD

While PPI has already been rendered a pointless metric, especially in the era of convertible, hybrid devices, the same cannot be said of pixels per degree (PPD). I think this is a much more accurate metric, and a more honest one at that.

Unlike PPI, which only deals with density, PPD takes viewing distance into account as well, so the same number makes sense on a smart watch and a 27-inch desktop display. Here is how it works, taking the good old iPhone 4 as an example.

The device has a 326 PPI display that is supposed to be used 10 inches from the eye, which works out to 57.9 PPD at the centre of the image, going up to 58.5 PPD at the edge. This is almost at the limit of 20/20 vision. If you have better than 20/20 eyesight, you could theoretically benefit from a higher resolution, but on a backlit screen, covered by smooth and reflective glass, with a pinch of anti-aliasing, few people could ever tell the difference.

The PPD formula is simple: 2dr tan(0.5°), where d is the viewing distance and r is the display resolution in pixels per unit length.

Before you start breaking out your trusty TI calculators, here’s an online PPD calculator you can use.
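Or, if you’d rather script it than use a calculator, the formula translates directly into Python. A quick sketch (note that the iPhone 4 example comes out at roughly 57 PPD, in the same ballpark as the figures cited above):

```python
import math

def ppd(ppi, viewing_distance_inches):
    """Pixels per degree: 2 * d * r * tan(0.5 degrees)."""
    return 2 * viewing_distance_inches * ppi * math.tan(math.radians(0.5))

# iPhone 4: 326 PPI viewed at 10 inches -> roughly 57 PPD.
print(f"{ppd(326, 10):.1f} PPD")
```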

So, let’s see what happens with a 5.5-inch 1080p phablet (iPhone 6 Plus) if we change the viewing distance. At the standard 10 inches, we end up with 71.2 PPD, however, at 11 inches the number goes up dramatically, to 78.1 PPD. At 12 inches it stands at 85 PPD, and at 13 inches we see 91.9 PPD.

Now let’s take a look at some cheap Androids with 720p panels. The visual density of a 5-incher at 10 inches is 52.1 PPD, but since I doubt that distance is realistic (it’s the same distance we used for a 3.5-inch iPhone), let’s see what happens at 11 and 12 inches: we get 57.1 PPD and 62.2 PPD respectively. An entry-level 5.5-inch phablet with the same resolution has a density of 47.5 PPD at 10 inches, but at a more realistic 13 inches, we end up with 61.3 PPD. Granted, used at the same viewing distance as the iPhone 4, these numbers look bad, but few people will use these much bigger devices at the exact same distance.

So, why am I changing the viewing distance to begin with? As I pointed out earlier, that’s something most users do without even noticing, especially on Android phones. When I upgraded from a 4.7-inch Nexus to a 5-incher with capacitive buttons, I noticed a slight difference in the way I handled it. When I started playing around with a few 5.5-inch phablets and going back and forth between them, the difference became more apparent. Of course, it will depend on the user; someone might have the exact same viewing distance with a 4-inch Nexus S and a 6-inch Nexus 6, but I doubt it. This is particularly true of Android because the UI is more or less a one-size-fits-all affair and does not take into account the loads of different panel sizes out there. Since I am a fan of stock Android, the difference was even more apparent; Lollipop looks almost the same on a 4.7-inch Nexus 4 and a white-box 5.5-inch phablet.

Apple does it differently. Well, to be honest, Apple didn’t even have to do it until the launch of the iPhone 6 Plus, because it only offered one screen size, which allowed it to optimize the user experience in no time.

Why Pixel Density Matters

Why should developers and designers care about all this? It’s mostly a hardware thing, anyway. Developers have nothing to do with this mess; just let Google, Samsung, Motorola and Apple sort it out.

Devs and designers aren’t part of the problem, but they can be part of the solution.

Like it or not, we have to waste perfectly good clock cycles and milliamps on these power-hungry things. Unfortunately, apart from optimisation, there’s not much developers can do. All mobile apps should be optimised for low power anyway, so that doesn’t make a difference. Designers can’t take into account every single resolution on every single screen size when they polish their designs. At the exact same resolution, they might have to use virtually no anti-aliasing, or moderate anti-aliasing, or go all out with some really aggressive edge-softening. It all depends on the type of device and screen size, not the resolution.

This is trickier than it sounds. Using slightly different settings for 5-inch and 5.5-inch devices with standard resolution screens sounds easy enough, but it would only address one side of the problem. Will a tall, 40-year-old Swede use a 5.5-inch 1080p phone at the same eye distance as a 14-year-old Taiwanese teen chatting with her girlfriends? Of course not.

This, among other things, is why I’ve come to despise PPI. It’s become a useless marketing number; it does not provide consumers with accurate information when they purchase a new device, and from a developer’s perspective, the PPI arms race is doing more harm than good. It’s not making hardware better in a noticeable way, yet it’s making it more expensive and less efficient. It is no longer improving the user experience, either, and in some cases it is even degrading it.

A few years ago, mobile designers had to take into account a few standard Apple resolutions and a handful of Android resolutions and screen sizes. Now, they have to deal with Apple products in more aspect ratios, resolutions and pixel densities. Android, due to its trademark fragmentation, poses a lot more challenges than Apple or Windows (Phone). While the trend has been to inch towards bigger screens and higher resolutions, a lot of Android devices still ship with 4.x-inch screens, and sub-720p resolutions. Add to that a host of legacy devices, and you end up with a pool of green goo.

Ten Easy Ways Of Wrecking User Experience On High-Res Devices

Let’s take a look at how high PPI displays have a negative impact on user experience, starting with hardware and performance issues.

  • Heavy websites are too demanding
  • Battery life and durability may take a substantial hit
  • Effect on storage, bandwidth, load times
  • Games that would otherwise run smoothly become jerky
  • SoC may be throttled, refresh rate lowered

Websites with a lot of demanding content, such as elaborate responsive sites, can be problematic even on underpowered desktops, let alone mobile devices. Five years ago, most of us relied on 1080p desktop displays and the iPhone 3GS had a 480×320 pixel display. Today, most people still use 1080p on desktop platforms, but at the same time they buy 1080p smartphones on the cheap. For some reason, people think it’s OK to place the same strain on a desktop and a $200 smartphone that has a fraction of the processing power. Toptal Software Engineer Vedran Aberle Tokic authored an excellent post dealing with problems caused by responsive websites on mobiles, so please check it out for more details.

Of course, as soon as you start pushing a smartphone or tablet to its limits, battery life takes a massive hit. So, now we have bigger batteries in our phones, and more powerful chargers, and wireless charging, and powerbanks; and we still run out of juice by sundown. This is not just an inconvenience; the battery has to endure more charging cycles, it degrades over time, and now that most smartphones ship with integrated batteries, this poses a problem for the average consumer.

Who cares if your app or website looks marginally better than your competitors’ if it ends up draining the battery faster? And what if your gorgeous hi-res designs end up loading slower, taking up more storage, and sucking up more bandwidth than the competition?

Games and other graphically demanding applications might benefit from higher resolutions, but they can also experience nasty performance issues. Casual games that don’t stress the GPU to its limits can look much better in very high resolutions, and they can be smooth even on underpowered hardware. However, 3D games, even casual ones, are a different story.

I am no gamer, and it’s been more than a decade since I was hooked on a game (Civilization, of course). However, I recently discovered World of Tanks Blitz for Android, and experienced a relapse, so here is some anecdotal evidence.

The game is easy to master, fast-paced, doesn’t require wasting hours per match, and it combines my love of history, technology, trolling people and blowing stuff up. Since I never install games on my phone, I tried it out on a 2048×1536 Android tablet, powered by a 2.16GHz Intel Atom Z3736F processor with 2GB of RAM. UX is good; after all, this is a popular game from a big publisher. Prior to the last update, the system would set the graphics preferences automatically, and I was happy with overall performance: about 30 FPS in most situations, dipping to the low 20s in some cases. However, the last update allowed me to tweak graphics options manually, and then I got to see what I was missing out on: much better water shaders, dynamic shadows, fancier particle effects and so on. I tweaked the settings a bit, but had to trade a lot of eye candy for performance.

With that particular hardware platform, the game would have been able to run at maxed out quality settings at 1024×768, at a substantially higher frame rate. In other words, my user experience would be better on a cheaper and slower device, with just one quarter of the pixels. Changing the resolution would obviously solve everything, but it can’t be done.

Reducing the load would also allow devices to run smoother for longer periods of time, without having to throttle their processors, automatically reduce screen brightness and so on. In some cases, hardware vendors even opted for lower screen refresh rates to preserve battery life and reduce load.

This brings us to aesthetics, and ways of messing up UX on hi-res devices that have nothing to do with performance issues:

  • Reliance on rasterised vs. vector graphics
  • Use of resampled images
  • Viewing old low-res content
  • Using legacy apps
  • Inadequate or overly aggressive anti-aliasing

Although vector graphics play a prominent role in design, we still have to rely on rasterised images for a lot of stuff. Vector graphics are more or less useless for everyday content delivery. For example, when developers create a simple news reader app, it might look magnificent, even on a low budget, on all devices. However, if the content provider doesn’t do a good job, the sleek and sharp design will be ruined, with inadequate visual content, such as low resolution images and video, compression artefacts, bad anti-aliasing, and so on. If forced to reuse older images, they may be tempted to resample them, making an even bigger mess.

The same goes for old content and apps. Not all websites look better on high resolution displays; not all websites are regularly updated to take advantage of new hardware. Ancient CSS does not look good on high PPI devices. Older apps can also misbehave, or end up with a broken UI.

Anti-aliasing can be another problem, but one of the ways of making sure it’s spot on is to rely on PPD rather than PPI. Of course, there is only so much developers and designers can do, especially if their products rely on third-party content, uploaded and maintained by the client.

Things Will Get Worse Before They Get Better

During any period of rapid tech evolution, teething problems are bound to occur. The fast pace of smartphone development and adoption has created numerous opportunities, along with more challenges for developers.

This high resolution race won’t go on for much longer; it’s impractical and becoming pointless. High-res screens are already shipping on low-cost devices, and the trend is going to slow down before it comes to a grinding halt. In the end, we will end up with a few standard resolutions from $200 to $1000 devices, and that’s it. There is a lot of room for improvement on other fronts, specifically, battery life and overall user experience.

Still, I think it’s a good idea to keep an eye on market trends and keep track of sales figures, just to be one step ahead and to know what to expect. It’s almost as important as tracking the spread of different OS versions and platform market share.

Unfortunately, there is not much developers and designers can do to tackle many of these issues. In my humble opinion, the best course of action is to keep clients in the loop, make them aware of potential issues beyond your control and issue clear guidelines on content that should be added to websites and mobile apps.

This post was originally published on Toptal.com BY NERMIN HAJDARBEGOVIC – TECHNICAL EDITOR @TOPTAL

Busting the Top 5 Myths About Remote Workers

This post was originally published on Toptal.com by  SCOTT RITTER – SENIOR SUPERVISOR @TOPTAL

Picture this…

You meet your friend Jeff for lunch. Jeff’s been managing the development of a new product and it’s about to be released. He’s pumped.

But the product release isn’t the main thing that’s got Jeff so excited. Instead, Jeff can’t stop raving about his new hire, Luis.

Jeff says Luis is the best software engineer he’s ever had on his team. Luis doesn’t just hit targets others can’t; he creates and hits targets others don’t even see. He’s virtually always available, always accountable, and brings more to the table than anyone Jeff’s ever worked with. And Luis just ‘gets it’.

As Jeff goes on, you’re blown away. But you’re really only half listening at this point, as you picture what it would be like to have someone like Luis on your own team…

But then, suddenly, Jeff says something that shatters the idyllic picture in your mind:

“And get this”, Jeff adds as he leans forward, “Luis is remote”.

What? Remote? How can that be? Everyone knows that hiring remote software developers is fraught with challenges. It’s hard enough if they’re relatively local. It’s even tougher if they’re remote. And it can be nothing short of a disaster if they’re overseas. Luis can’t possibly be that good.

Sorry dude. Your stereotypes have just been shattered.

Wow. You don’t know what to think.

Reality check

In fairness, the remote employment stereotypes you’ve had in your mind until now aren’t entirely unfounded, especially when it comes to offshore workers. Recent years have seen an increasing flood of cutthroat overseas development shops promising extremely inexpensive services and then failing to deliver by even the most basic of professional standards. With so many negative experiences, a growing stigma has developed that all offshore workers are low quality, undependable, and unable to communicate effectively.

To be sure, there certainly are pitfalls to be aware of and to avoid when hiring remote workers, especially overseas. But if the negative stereotype of remote hiring were as across-the-board-true as some would have you believe, then how does one explain the dramatic 80% increase in the remote workforce from 2005 to 2012? The simple fact is that working and hiring remotely is a growing and increasingly successful paradigm.

Case in point: GitHub. GitHub hosts over 10 million code repositories and recently received $100 million in Series A funding. Not too shabby. And guess what? Over two-thirds of their employees are remote, distributed all across the globe. It seems safe to assume that the remote employment model is working really well for them.

And then there’s Dell – the IT mega-vendor with annual revenues that were in excess of $62 billion in 2012. Dell must be highly confident in the viability of remote work as part of their core business model as well, having recently announced a goal of having half of its employees work remotely within 6 years (by 2020).

Clearly, there must be more to hiring software engineers and working remotely than traditional stereotypes would lead one to believe. In truth, when employed properly and intelligently, a staffing strategy that incorporates remote team members can be a huge win for everyone involved. Some of the best talent in the world telecommutes and, whether we want to admit it or not, a growing percentage of that talent is overseas.

With that in mind, this post pulls the rug out from under 5 of the most prevalent myths about remote workers, with a specific focus on software developers.

MYTH #1: You get what you pay for.
REALITY: Sometimes you do, sometimes you don’t. It all depends.

Many growing companies and startups are realizing that the very best developers may not be located within commuting distance of their offices. Even if they are, their rates may be ridiculously disproportionate to their skill levels. A local hire in no way guarantees a wise investment.

A key problem with hiring overseas team members is that most employers go about the process of selecting and hiring these remote workers entirely wrong. When most employers turn their sights to hiring overseas, they make the penny-wise-and-pound-foolish decision to exploit the differential in labor costs to its fullest and hire the cheapest labor they can find. Some even fool themselves into thinking they’ve avoided this trap by hiring a not-quite-as-cheap software developer, but they still rarely find in the long run that their time or money was well spent.

When hiring a remote worker, the primary focus (as with any hiring) needs to remain on quality. Any cost savings that result should merely be viewed as an added bonus. Decisions in life that are solely based on economics rarely prove to be wise ones, and the same most certainly holds true for hiring remote software engineers.

Employ the best, not the cheapest, and you’ll be the beneficiary of remote team members who are nothing short of stellar.

MYTH #2: Offshore software developers aren’t as good as domestic software developers.
REALITY: As a generalization, simply not true.

Ever worked with anyone from the U.S. who was fired for underperforming? I would venture to say that we all have. There are great and terrible developers in Atlanta, Chicago, and New York, just as there are in Argentina, Portugal, and Hungary. Quality is not a deterministic function based on geographic location.

And nothing against the good ol’ USA, but we actually ranked 30th worldwide (out of 65 countries tested) for math in 2012, and that’s down from a ranking of 25th worldwide in 2009. Not a good sign. And this is particularly significant with regard to hiring software developers, since algorithmic aptitude can often be an essential underpinning to effective software engineering. Don’t get me wrong, we have some of the very best developers here in the U.S., but to say that top developers from other countries are not as good as those in the U.S. would simply not be true. Top developers are who they are because they stay up-to-date on leading-edge technology and because of their commitment to technical excellence, not because of where they are located geographically.

Remember the following truisms:

  • A good remote developer is better than a bad local developer.
  • A great remote developer is better than a good local developer.
  • A top developer is a top developer, regardless of where they are located.

MYTH #3: Differences in culture, language, and time zone are serious problems.
REALITY: They can be, which is why it’s so important to hire wisely.

Yes, the potential for cultural, language, and time zone challenges does exist when hiring a remote web developer or software engineer, but these challenges can certainly be surmounted through a highly exacting recruiting process that centers around an uncompromising commitment to excellence.

Hyam Singer’s post In Search of the Elite Few discusses a methodology for finding and hiring the best software engineers in the industry. That approach is no less applicable to remote or overseas candidates than it is to those who are local.

So with that in mind, let’s examine the cultural, language, and time zone challenges of remote hiring in more detail:

  • Culture. In the context of hiring, when people refer to culture, they often really mean work ethic and moral standards. During the interview process, posing hypothetical “ethical and moral dilemmas” – that don’t have black-and-white, right and wrong answers – is a great way of gauging a candidate’s business ethics and moral compass.
  • Language. There’s no denying that the ability to communicate clearly with our colleagues is essential. Problematic misunderstandings can easily arise from misinterpreted subtleties in language, a problem which is exacerbated when working with someone remotely. It is therefore crucial to thoroughly evaluate communication skills when selecting a remote team member, especially if his or her native language is something other than English. An otherwise stellar team player can prove to be more of a liability than an asset due to an inadequate command of the language.
  • Time Zone. First of all, the U.S. shares its four time zones with numerous other “offshore” cities that have high concentrations of talented software engineers, so “offshore” and “time difference” are not necessarily synonymous. Moreover, as long as the time of day at the remote location is within 5 or 6 hours of your time zone (which is true in a large percentage of cases), you’ll always have at least a few hours of overlapping “at work” time each day, as the sketch after this list illustrates.
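To make the time zone arithmetic concrete, here is a toy sketch that intersects two 9-to-5 workdays offset by a time zone difference (the 9-to-5 schedule is, of course, an idealization):

```python
# Overlap of two 9:00-17:00 workdays separated by a time zone difference.
def overlap_hours(tz_delta_hours, start=9, end=17):
    """tz_delta_hours: how many hours ahead the remote worker's clock is."""
    # Shift the remote workday into local time, then intersect the intervals.
    remote_start = start - tz_delta_hours
    remote_end = end - tz_delta_hours
    return max(0, min(end, remote_end) - max(start, remote_start))

print(overlap_hours(5))  # 5-hour difference -> 3 shared office hours
print(overlap_hours(6))  # 6-hour difference -> 2 shared office hours
```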

MYTH #4: Remote developers won’t integrate well with your team.
REALITY: If they’re good, they’ll bend over backwards to prove themselves to be stellar team players.

A sharp developer is not only sharp technically, but is also socially and professionally astute, and is therefore well aware of the reservations and skepticism that you may have.

Moreover, just as it is hard for top-notch companies to find superior developers, it is often hard for top-notch developers to find superior companies to work for, especially remotely.

For these reasons, a high-caliber remote developer will often work that much harder to gain your trust and respect. Show her that trust, grant her that respect, and you’ll have more than a team player; you’ll have someone who’s unflinchingly loyal and committed to the success of your project.

MYTH #5: Qualified remote software developers are next to impossible to find.
REALITY: Not if you know where and how (and where and how not!) to look for them.

Well, if you get an unsolicited email from a company touting offshore resources, and it contains a whole slew of grammatical and typographical errors, that’s probably a pretty good indicator of a source that you don’t want to turn to for offshore hires. Remember, we’re looking for quality.

The sources for remote software engineering team members basically fall into three categories:

  • Offshore body shops. This is, unfortunately, the majority of what’s out there and much of the reason for the negative stereotypes that exist. They’re the ones we referred to earlier that promise extremely inexpensive service and then fail to deliver by even the most basic of professional standards. Avoid them. Like the plague. Period.
  • Independent consultants. This is where the needle-in-a-haystack challenge comes into play. The quality can vary widely. Some can be cream-of-the-crop, but many tend to be inferior. The inferior ones are usually fairly easy to detect based on the low caliber of their communication skills or the level of desperation that they exude. The premier ones, on the other hand, can be elusive and hard to find, typically finding work through their own network of contacts. One negative with the best of these remote software developers is that they sometimes overcommit (to avoid a dry spell) and may fall behind on some of their deadlines as a result. After all, it’s hard to be your own marketing department. But that said, if you find one of these aces, and if they make your project a priority, they can be a tremendous asset to your team.
  • International freelance networking sites. A number of international freelance networking sites have emerged in recent years, intended to serve as a marketplace to connect customers and remote software engineering resources around the globe. Many independent consultants utilize these networking sites as a means of augmenting their efforts to market their services. As a result, these sites offer customers a more centralized means of accessing global technical resources. However, these networks themselves are really just focused on providing a marketplace for technical services, rather than focusing on (or vouching for) the quality of the individual services offered. Accordingly, the challenge here remains that of quality; while some of the resources available through these networks are top-notch, the majority tend to be substandard.

To be sure, global networking is a greater challenge – potentially a much greater challenge – than local or domestic networking. But as has been discussed throughout this post, high quality technical resources do exist around the globe. By identifying a core group of stellar software engineers in key remote locations, and then using them as the nucleus from which to build out an ever-growing A+ network, one can realize the benefits of (and offer) a globally-distributed workforce, while minimizing the downside. The company I work for, Toptal, has done precisely that, and is in fact employing this business model with great success.

Wrap-up

Great developers live where great developers live. It’s that simple. Many are here in the U.S. Many are in South America. Many are in Ukraine. No country or region has a monopoly on remote developers.

The challenge, whether domestically or abroad, is to navigate through the masses to identify the elite few. International high-end developer networks are emerging as a highly effective means of finding and tapping into these valuable resources across the globe. Indeed, great developers tend to gravitate toward great developers, wherever they may be. And that’s a fact.

Spectacular Crowdfunding Fails And Their Impact On Entrepreneurship

This post was originally published on Toptal.com BY NERMIN HAJDARBEGOVIC – TECHNICAL EDITOR @ TOPTAL. Republished with permission.

Before I proceed, let me make it absolutely clear that I have nothing against crowdfunding. I believe the basic principle behind crowdfunding is sound, and, in a perfect world, it would boost innovation and provide talented, creative people with an opportunity to turn their dreams into reality.

Unfortunately, we live in the real world, and therefore it’s time for a reality check:

Reality /rɪˈalɪti/
Noun

  1. The state of things as they actually exist.
  2. The place where bad crowdfunded ideas come to die.

While most entrepreneurs may feel this mess does not concern them because they don’t dabble in crowdfunding, it could have a negative impact on countless people who are not directly exposed to it:

  1. We are allowing snake oil peddlers to wreck the reputation of crowdfunding and the startup scene.
  2. Reputational risks extend to parties with no direct involvement in crowdfunding.
  3. By failing to clean up the crowdfunding scene, we indirectly deprive legitimate ideas of access to funding and support.
  4. When crowdfunded projects crash and burn, the crowd can quickly turn into a mob.

But Wait, Crowdfunding Gave Us Great Tech Products!

Indeed, but I am not here to talk about the good stuff, and here is why: For every Oculus Rift, there are literally hundreds of utterly asinine ideas vying for crowd-cash.

Unfortunately, people tend to focus on positive examples and overlook everything else. The sad truth is that Oculus Rift is a bad example of crowdfunding, because it’s essentially an exception to the rule. The majority of crowdfunding drives don’t succeed.

How did a sound, altruistic concept of democratizing entrepreneurship become synonymous with failure? I could list a few factors:

  • Unprofessional media coverage
  • Social network hype
  • Lack of responsibility and accountability
  • Lack of regulation and oversight

The press should be doing a better job. Major news organizations consistently fail to recognize impossible ideas, indicating they are incapable of professional, critical news coverage. Many are megaphones for anyone who walks through the door with clickbait.

The press problem is made exponentially worse by social networks, which allow ideas to spread like wildfire. People think outlandish ideas are legitimate because they are covered by huge news outlets, so they share them, assuming the media fact-checked everything.

Once it becomes obvious that a certain crowdfunding initiative is not going to succeed, crowdfunding platforms are supposed to pull the plug. Sadly, they are often slow to react.

Crowdfunding platforms should properly screen campaigns. The industry needs a more effective regulatory framework and oversight.

Realistic Expectations: Are You As Good As Oculus Rift?

Are you familiar with the “Why aren’t we funding this?” meme? Sometimes the meme depicts awesome ideas, sometimes it shows ideas that are “out there” but entertaining nonetheless. The meme could be applied to many crowdfunding campaigns with a twist:

”Why are we funding this?”

This is what I love about crowdfunding. Say you enjoyed some classic games on your NES or Commodore in the eighties. Fast forward three decades and some of these games have a cult following, but the market is too small to get publishers interested. Why not use crowdfunding to connect fans around the globe and launch a campaign to port classic games to new platforms?

You can probably see where I’m going with this: Crowdfunding is a great way of tapping a broad community in all corners of the world, allowing niche products and services to get funded. It’s all about expanding niche markets, increasing the viability of projects with limited mainstream appeal.

When you see a crowdfunding campaign promising to disrupt a mainstream market, that should be a red flag.

Why? Because you don’t need crowdfunding if you have a truly awesome idea and business plan with a lot of mainstream market appeal. You simply need to reach out to a few potential investors and watch the money roll in.

I decided against using failed software-related projects to illustrate my point, for two reasons:

  • Most people are not familiar with the inner workings of software development, and can’t be blamed for not understanding the process.
  • My examples should illustrate hype, and they’re entertaining.

That’s why I’m focusing on two ridiculous campaigns: the Triton artificial gill and the Fontus self-filling water bottle.

Triton Artificial Gill: How Not To Do Crowdfunding

The Triton artificial gill is essentially a fantasy straight out of Bond movies. It’s supposed to allow humans to “breathe” underwater by harvesting oxygen from water. It supposedly accomplishes this using insanely efficient filters with “fine threads and holes, smaller than water molecules” and is powered by a “micro battery” that’s 30 times more powerful than standard batteries, and charges 1,000 times faster.

Sci-Tech Red Flag: Hang on. If you have such battery technology, what the hell do you need crowdfunding for?! Samsung, Apple, Sony, Tesla, Toyota and just about everyone else would be lining up to buy it, turning you into a multi-billionaire overnight.

Let’s sum up the problems with these claims:

  • The necessary battery technology does not exist.
  • The described “filter” is physically impossible to construct.
  • The device would need to “filter” huge amounts of water to extract enough oxygen (see the rough numbers below).

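To see why that last point is damning, here is a rough back-of-envelope sketch, using my own textbook figures rather than anything from the campaign: well-aerated seawater holds about 8 mg of dissolved oxygen per liter, while a human at rest consumes roughly 0.5 liters of oxygen gas (about 0.7 grams) per minute.

```python
# Back-of-envelope sketch: how much seawater would an artificial gill
# have to process? Rough textbook values, not campaign data.

O2_GAS_DENSITY = 1.43     # grams of O2 per liter of gas at standard conditions
RESTING_O2_DEMAND = 0.5   # liters of O2 a resting human consumes per minute
DISSOLVED_O2 = 0.008      # grams of dissolved O2 per liter of aerated seawater

o2_demand_g_per_min = RESTING_O2_DEMAND * O2_GAS_DENSITY     # ~0.7 g/min
water_liters_per_min = o2_demand_g_per_min / DISSOLVED_O2    # ~90 L/min

print(f"Seawater to process at 100% extraction: ~{water_liters_per_min:.0f} L/min")
```

Even granting a physically impossible 100% extraction rate, the device would have to pump roughly 90 liters of seawater per minute past its “filters”, and several times that for a diver who is actually swimming. Nothing the size of a snorkel can do that.
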
Given all the outlandish claims, you’d expect this sort of idea to be exposed for what it is within days. Unfortunately, it was treated as a legitimate project by many media organizations. It spread to social media and eventually raised nearly $900,000 on Indiegogo in a matter of weeks.

Luckily, the campaign was ultimately forced to refund its backers.

Fontus Self-Filling Water Bottle: Fail In The Making

This idea doesn’t sound as bogus as the Triton, because it’s technically possible. Unfortunately, this is a very inefficient way of generating water. A lot of energy is needed to create the necessary temperature differential and cycle enough air to fill up a bottle of water. If you have a dehumidifier or AC unit in your home, you know something about this. Given the amount of energy needed to extract a sufficient amount of water from the air, and the size of the Fontus, it might produce enough water to keep a hamster alive, but not a human.
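
To put rough numbers on this, here is a hedged sketch using my own assumptions rather than the campaign’s specs: an efficient household compressor dehumidifier condenses on the order of 2 liters of water per kWh under favorable humidity, and a bottle-sized solar panel might plausibly deliver about 5 watts.

```python
# Rough sanity check on a self-filling water bottle, using assumed
# figures (an efficient compressor dehumidifier, a small solar panel),
# not the campaign's own specs.

LITERS_PER_KWH = 2.0    # yield of an efficient household dehumidifier
BOTTLE_LITERS = 0.5     # target: half a liter of drinking water
PANEL_WATTS = 5         # hypothetical bottle-sized solar panel

energy_wh = BOTTLE_LITERS / LITERS_PER_KWH * 1000   # ~250 Wh
hours_of_sun = energy_wh / PANEL_WATTS              # ~50 hours

print(f"Energy to condense {BOTTLE_LITERS} L: ~{energy_wh:.0f} Wh")
print(f"Sunlight needed on a {PANEL_WATTS} W panel: ~{hours_of_sun:.0f} hours")
```

That works out to around 50 hours of full sun for half a liter, and it assumes compressor-grade efficiency; the small thermoelectric cooler a gadget like this can actually carry is far less efficient, so the real figure would be considerably worse. Hence the hamster.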

While this idea isn’t as obviously impossible as the Triton, I find it even worse, because it’s still alive and the Indiegogo campaign has already raised about $350,000. What I find even more disturbing is the fact that the campaign was covered by big and reputable news organizations, including Time, Huff Post, The Verge, Mashable, Engadget and so on. You know, the people who should be informing us.

I have a strange feeling the people of California, Mexico, Israel, Saudi Arabia and every other hot, arid corner of the globe are not idiots, which is why they don’t get their water out of thin air. They employ other technologies to solve the problem.

Mainstream Appeal Red Flag: If someone actually developed a technology that could extract water from air with such incredible efficiency, why on Earth would they need crowdfunding? I can’t even think of a commodity with more mainstream appeal than water. Governments around the globe would be keen to invest tens of billions in their solution, bringing abundant distilled water to billions of people with limited access to safe drinking water.

Successful Failures: Cautionary Tales For Tech Entrepreneurs

NASA referred to the ill-fated Apollo 13 mission as a “successful failure” because the lunar landing had to be aborted, yet the crew overcame near-catastrophic technical problems and made it safely back to Earth.

The same could be said of some tech crowdfunding campaigns, like the Ouya Android gaming console, Ubuntu Edge smartphone, and the Kreyos Meteor smartwatch. These campaigns illustrate the difficulty of executing a software/hardware product launch in the real world.

All three were quite attractive, albeit for different reasons:

  • Ouya was envisioned as an inexpensive Android gaming device and media center for people who don’t need a gaming PC or flagship gaming console.
  • Ubuntu Edge was supposed to be a smartphone-desktop hybrid device for Linux lovers.
  • The Kreyos Meteor promised to bring advanced gesture and voice controls to smartwatches.

What went wrong with these projects?

  • Ouya’s designers used the latest hardware available, which sounded nice when the concept was unveiled but was outdated by the time the console shipped. Soft demand contributed to a lack of developer interest.
  • The Ubuntu Edge was a weird but good idea. It raised more than $12 million in a matter of weeks, but the goal was a staggering $32 million. Although quite a few Ubuntu gurus were interested, the campaign proved too ambitious. Like the Ouya, the device came at the wrong time: smartphone evolution slowed down, competition heated up, and prices tumbled.
  • The Kreyos Meteor had an overly optimistic timetable, promising to deliver products just months after the funding closed. It was obviously rushed, and the final version suffered from severe software and hardware glitches. On top of that, demand for smartwatches in general proved to be weak.

These examples show that even promising ideas can run into insurmountable difficulties. They got plenty of attention and money, and they were sound concepts, but they didn’t pan out. They were not scams; they simply failed.

Even industry leaders make missteps, so we cannot hold crowdfunded startups to a higher standard. Here’s the difference: If a new Microsoft technology turns out to be a dud, or if Samsung rolls out a subpar phone, these failures won’t take the company down with them. Big businesses can afford to take a hit and keep going.

Why Crowdfunding Fails: Fraud, Incompetence, Or Wishful Thinking?

There is no single reason that would explain all crowdfunding failures, and I hope my examples demonstrate this.

Some failures are obvious scams, and they confirm we need more regulation. Others are bad ideas backed by good marketing, while some are genuinely good ideas that may or may not succeed, just like any other product. Even sound ideas executed by good people can fail.

Does this mean we should forget about crowdfunding? No, but first we have to accept the fact that crowdfunding isn’t for everyone, that it’s not a good choice for every project, and that something is very wrong with crowdfunding today:

  • The idea behind crowdfunding was to help people raise money for small projects.
  • Crowdfunding platforms weren’t supposed to help entrepreneurs raise millions of dollars.
  • Most Kickstarter campaigns never get fully funded, and successful ones usually don’t raise much money. One fifth of submitted campaigns are rejected by Kickstarter, while one in ten fully-funded campaigns never deliver on their promises.
  • Even if all goes well, crowdfunded products still have to survive the ultimate test: The Market.

Unfortunately, some crowdfunding platforms don’t appear eager to scrutinize dodgy campaigns before they raise heaps of money. This is another problem with crowdfunding today: Everyone wants a sweet slice of the crowdfunded pie, but nobody wants a single crumb of responsibility.

That’s why I’m no optimist; I think we will keep seeing spectacular crowdfunding failures in the future.

Why Nobody Cares About Your Great Idea

A wannabe entrepreneur starts chatting to a real entrepreneur:

“I have an awesome idea for an app that will disrupt…”

“Wait. Do you have competent designers, developers, funding?”

“Well, not yet, but…”

“So what you meant to say is that you have nothing?”

This admittedly corny joke illustrates another problem: on their own, ideas are worthless. Ideas backed by hard work, research, and a team of competent people, however, are what keeps the industry going.

Investors don’t care about your awesome idea, and they never will. Once you start executing your idea and get as far as you can on your own, people may take notice. Investors want to see dedication and confidence. They want to see prototypes, specs, business plans, and research, not overproduced videos and promises. If individuals are unwilling or unable to take the first steps on their own, if they can’t prove they believe in their vision and have the know-how to turn it into reality, then no amount of funding will help.

Serious investors don’t just want to hear what founders hope to do; they want to see what those founders accomplished before asking for money.

Why not grant the same courtesy to crowdfunding backers?

This post was originally published on Toptal.com by Nermin Hajdarbegovic, Technical Editor at Toptal. The original can be read here. Republished with permission.