What is Link Retargeting?

This post was originally published by Serge Salager. Serge is the CEO and Founder of RetargetLinks. The post can be read here

To put it simply, link retargeting is just like traditional ad retargeting. The key difference is that instead of having to send customers to your site, you can display retargeted ads based on the link they click. And it can be any link – not just to your website.

Link retargeting really allows you to take your content, social, email, or even AdWords marketing farther! We’ve put together five key tips you need to know to get started.

Can I shorten a link to any content?

The short answer (pun intended!) is yes! You can shorten any link on any platform to any site. To make the most of your efforts, we recommend making sure the content is relevant to your brand. This way, you’ll improve the odds that your target customer will click.

As an example, Pampers is using link retargeting to target ‘first-time moms’. They chose to direct their audience to a relevant article in Parents Magazine: “How to prepare for your first baby?”

Step 1: The advertiser posts “retarget” short links through social media, email, press or influencer platforms.

Step 2: The service retargets only those who click on the link. In this case, it will show 150,000 banner ads to 10,000 people.

Can I use link retargeting on a standard “long” link?

Link retargeting is not possible with a standard link. This is because it requires specialized technology that allows the link to place a retargeting cookie on the computer of the person who clicks.

We’ve developed this software to make it really easy for you to turn your standard links into retargeting short links. All it takes is the click of a button in your RetargetLinks dashboard.
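Under the hood, a retargeting short link is essentially a redirect endpoint that drops a cookie (or syncs an ID with an ad exchange) before sending the visitor on to the destination. The following is a minimal sketch of that idea in Python with Flask; it is purely illustrative, not RetargetLinks' actual implementation, and the slug mapping, cookie name, and URLs are hypothetical.

    import uuid
    from flask import Flask, redirect, make_response

    app = Flask(__name__)

    # Hypothetical mapping of short slugs to destination URLs (e.g. re.tc/patent).
    SHORT_LINKS = {"patent": "https://example.com/our-patent"}

    @app.route("/<slug>")
    def follow_short_link(slug):
        destination = SHORT_LINKS.get(slug)
        if destination is None:
            return "Unknown link", 404
        response = make_response(redirect(destination, code=302))
        # The cookie (or an ID synced with an ad exchange) is what later lets the
        # ad platform recognise this visitor and serve retargeted banner ads.
        response.set_cookie("retarget_id", uuid.uuid4().hex,
                            max_age=60 * 60 * 24 * 21)  # roughly a three-week window
        return response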

Can I customize my short links?

Absolutely. Our short links are quite flexible, to allow you to have them appear exactly how you’d like.

You can customize the default re.tc links (this is a link to our patent for example: re.tc/patent). You can also request a short vanity URL (su.tt or jmpr.rocks are examples from some of our clients).

Note: In the vanity URL example, you’ll need to buy the short domain name first and then follow the instructions provided in your dashboard to start link retargeting using your own short links.

When running AdWords campaigns, you’re actually able to hide the short link within your AdWords ad link (see more here on how to set up a search retargeting campaign).

How many ads will be shown and where?

Our default volume cap (the maximum number of ads we show per person) is 15. This works out to up to 9 ads per week, 5 ads per day, and 2 ads per hour, depending on the audience. We do this to keep your brand top of mind over the two to three weeks following the launch of your campaign.
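For illustration only, here is one way a per-person volume cap like the one described above could be enforced. The limits are taken from this post; the code is a sketch, not how RetargetLinks actually implements capping.

    import time

    CAPS = {"total": 15, "week": 9, "day": 5, "hour": 2}   # limits quoted above
    WINDOWS = {"week": 7 * 24 * 3600, "day": 24 * 3600, "hour": 3600}

    def may_show_ad(impression_times, now=None):
        """impression_times: UNIX timestamps of ads already shown to this person."""
        now = now or time.time()
        if len(impression_times) >= CAPS["total"]:
            return False
        for window, seconds in WINDOWS.items():
            if len([t for t in impression_times if now - t < seconds]) >= CAPS[window]:
                return False
        return True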

We display banner ads just like a traditional retargeting tool. Your ads will display in Google AdX, OpenX, Rubicon, AppNexus and other real-time bidding platforms across premium online publications like Vogue, Elle, Fortune, FastCompany, Wall Street Journal and all other ad-supported sites.

How do I know if my link retargeting campaign is working?

There are three key metrics we use to determine whether a link retargeting campaign is working. They are: link clicks, ad clicks, and conversions. We’ve included some steps here to show you how to measure these metrics.

Step One – Measure Your Link Clicks

Make sure your link retargeting campaign is reaching your target audience. Emails, online articles, social media posts, newsletters, press releases, and even Google AdWords are all ways for you to share your short links.

If you’re just starting or are looking to reach out to more targets, we recommend using RetargetLinks as a prospecting tool. You can do this by boosting posts on social media channels, or paying for ads in Google AdWords.

Then, you can tell if your campaign is working by looking at the number of link clicks on your Links Dashboard.

If you’re sharing the right content with the right audience on the right channels, you’ll see a lot of clicks. The example you’ll see next is from a campaign run by the team at Traction Conference. As a result of their RetargetLinks content campaign, they had 85,138 clicks (58,296 unique) from 873 links shared via their email newsletter (direct), Twitter and Facebook pages.

Step Two – Measure Your Ad Clicks

The second indication to help you measure your campaign is to look at the number of ad clicks on your Ads Dashboard.

When you display relevant and compelling banner ads, you’ll catch the attention of your targets and encourage them to click to find out more.

Helpful tip: banner ads are most effective when they have consistent branding, simple messaging, a clear call-to-action (CTA), and even some element of animation. 

In the above example, Traction Conference managed to display 161,340 retargeting ads to most of the 58,138 people that clicked on their short links. Out of those, 422 people clicked for a 0.26% click-through rate. Note that this is three times the 0.10% average for banner ad performance!

Step Three – Measure Your Conversions

The final indication of performance is to look at the number of people that land on your page and ultimately the number of those that convert by purchasing your product or subscribing to your service.

In the case of Traction Conference, 947 people landed on the marketing page and 186 actually went on to purchase a ticket for the conference. The team was able to achieve a 20% conversion rate. Note that this is 10 times greater than a typical retargeting ad conversion rate.
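To make the three metrics concrete, here is the simple arithmetic behind the Traction Conference figures quoted above:

    # Figures quoted in this post (Traction Conference campaign).
    ads_shown = 161_340     # retargeting banner impressions
    ad_clicks = 422         # clicks on those banners
    landings  = 947         # visitors reaching the marketing page
    purchases = 186         # conference tickets purchased

    ctr = ad_clicks / ads_shown        # ~0.26% click-through rate
    conversion = purchases / landings  # ~20% conversion rate
    print(f"CTR: {ctr:.2%}  Conversion: {conversion:.2%}")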

Summary

Hopefully if you’ve made it this far down the post, you have a better idea of how link retargeting works. Now you are ready to make the most out of your campaigns.

If you have any questions, don’t hesitate to drop us a line as we’d love to hear from you! If you’re ready to get started, click here to create your first shortened retarget link!


How to Design Delightful Experiences for the Internet of Things


This post was originally published on Toptal.com BY SERGIO ORTIZ – DESIGNER @ TOPTAL

One of the next technological revolutions on the horizon is the emerging platform of the Internet of Things (IoT). The core of its promise is a world where household appliances, cars, trucks, trains, clothes, medical devices, and much more would be connected to the internet via smart sensors capable of sensing and sharing information.

As its presence in our lives grows, the Internet of Things (IoT) will be fundamental to most things we see, touch, and experience—UX design will play an important, if not essential, role in that advancement.

From healthcare to transportation—from retail to industrial applications, companies are constantly searching for new ideas and solutions in order to create new experiences, deliver greater value to customers, and make people’s lives easier and more efficient.

If you think you don’t know what IoT is, you’ve probably already experienced it and just didn’t realize what it was. Home automation hubs like Google’s Home and Amazon’s Alexa, the Nest Learning Thermostat, Philips Hue lightbulbs, Samsung SmartThings, Amazon Go, and fridges that monitor their contents all fall into the IoT category.

Flo, a smart residential water system that monitors water efficiency, leaks, and waste

The next wave of IoT will connect millions of devices across the globe and make homes, cities, transportation, and factories smarter and more efficient. Real-time data produced by hundreds of IoT sensors will change the way businesses operate and how we see the world.

The skills needed in this new paradigm will shift from component thinking to whole systems thinking; from one screen to multiple touch-points. Most IoT systems will be connected to an app, but this will eventually evolve into a multi-interface world, some of it yet to be invented.

Designers must adapt to new technologies and paradigms or risk becoming irrelevant. Experiences that we design for are shifting dramatically—think AI, VR, AR, MR, IoT, and any combination thereof.

Utilizing live streaming data collected from millions of sensors, designers will be tasked with crafting experiences that turn that data into something useful via an interface (a mobile app, smart TV, smart mirror, smartwatch, or car dashboard).

There will be huge opportunities for designers in the industrial Internet of Things. Organizations of all types and industries are investing heavily in this space, making IoT growth projections astronomical—to the tune of 50 billion connected devices by 2020.

Graphic by Clarice Technologies

IoT Is Already Here

An example of an IoT ecosystem available today is an internet connected doorbell that has a video camera, speaker, microphone, and motion sensor. When a visitor either rings the doorbell or comes near the front door, the owner receives a notification on their mobile via the app. The owner is able to communicate with the visitor via the speaker and microphone; they can let the visitor in via a remote controlled door lock or instruct a delivery person to leave the package somewhere safe.
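As a rough sketch of that interaction flow (purely illustrative, with hypothetical device and notification objects rather than any vendor's real SDK), the logic might look something like this:

    # Step 1: the doorbell pushes a notification to the owner's phone via the app.
    def handle_doorbell_event(event, doorbell, owner):
        if event.type in ("ring", "motion"):
            owner.notify(f"{event.type} detected at the front door",
                         video_stream=doorbell.live_stream_url)

    # Step 2: the owner responds through the app.
    def handle_owner_action(action, doorbell, smart_lock):
        if action == "talk":
            doorbell.open_two_way_audio()   # speaker + microphone
        elif action == "unlock":
            smart_lock.unlock()             # remote-controlled door lock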

SkyBell is a smart video doorbell that allows you to see, hear, and speak to the visitor at your door whether you’re at home, at work, or on the go.

Another example is Nanit—a connected baby monitor with computer vision. It has real-time HD video with night vision, plus temperature and humidity sensors. Its app gives you access to recorded and live HD video streams and smart notifications.

The IoT baby monitor Nanit

Implications for UX Design

These new experiences will require new modes of interaction—modalities yet to be designed. Touch will evolve and expand. Gestures and physical body motion will become a more natural way of interacting with the digital world around us.

The IoT space is ready for exploration and designers need to investigate the potential human interaction models, how to design for them and find ways to unlock value. The focus will no longer be on singular experiences, but instead those that represent a broader ecosystem.

The Myo armband

Designers will become involved during every stage of the design process as it will become more about designing the entire product experience.

They will need to share creative authority during the whole development cycle and effectively influence the outcome of the end product, working in collaboration with an industrial designer—for example, on what that IoT doorbell looks like, how it works, the video and sound between the two parties, and the unlocking and locking of the door.

Five Critical Aspects for Designers to Consider in the IoT Era

1) Prepare for Evolving User Interactions

Google Home connects seamlessly with smart IoT devices so you can use voice to set the perfect temperature or turn down the lights.

Just as touchscreens introduced the pinch, finger scroll, and swipe, we’ll soon be introducing other ways of interacting with IoT systems. We can expect that hand gestures will continue to be used, but we’ll begin to see even more natural movements, such as tiny finger movements, as options for controlling devices in our environment.

Google is already preparing for a future where hand and finger movements will control things in our environment. Its Project Soli is an interaction sensor that uses radar for motion tracking of the human hand.

Radar-sensed hand and finger tracking (Google’s Project Soli)

IoT will no doubt be integrated with VR. With VR, our movements mimic those of the real world. Moving our heads up, down and around allows us to explore the VR world in a natural way. We’ll be able to control our environment through commonly used arm, hand, and finger movements.

Merging the VR experience with IoT opens up many new possibilities. Imagine an Amazon Go VR version—a self-serve grocery store in a VR world where a customer “walks in” and collects items into their virtual shopping cart by picking up their choices from the store shelves with natural hand movements.

For designers, feedback and confirmation are important considerations in this new paradigm as are many of the 10 Usability Heuristics for User Interface Design. Many of these “rules of thumb” will live on:

  • Visibility of system status
  • Match between the system and the real world
  • User control and freedom
  • Consistency and standards
  • Flexibility and efficiency of use
  • Help users recognize, diagnose, and recover from errors

Voice will play a huge role. Even the act of walking will dictate some level of control. As these new controls get more refined and are adopted by users, they will become the standard by which we interact in this space, whether a screen is present or not.

Using Amazon Alexa is as simple as asking a question. Just ask to play music, read the news, control your smart home, call a car.

What about other tactile, sensory or emotive inputs? How will emotions and physiology apply to this space? Designers must get ahead of this new paradigm or risk being left behind.

2) Rethink and Adapt to Interactions of the Future

It’s safe to say that things like the ‘menu’ in a user interface will, in some shape or form, always be a part of the experience. And just as we saw the introduction of the hamburger menu once mobile became ubiquitous, we’ll need to explore its evolution (or something similar) more extensively within IoT environments.

You need look no further than wearables like Samsung’s Gear S3 Watch to see how menu controls might evolve.

As we create the UIs of the future and new modes of interaction, we’ll need to make sure we keep in mind the users’ expectations. Designers will still need to follow usability and interaction standards, conventions, and best practices. By evolving from what is already known, the potential of new technologies can be harnessed—innovative UIs can be designed while still maintaining enough familiarity for them to be usable.

In the not-too-distant future, our daily lives will be imbued with micro-interactions as we move from device to device and UI to UI. There will not be just one, but many interfaces to interact with in a multitude of ways as people move through their day. An interaction may begin at home on a smart mirror, continue on a smartwatch down the street and on a mobile in a taxi, and then finish on a desktop at work. Continuity and consistency will play an important part.

As IoT continues to grow and evolve, we’ll encounter never-before-seen devices, new methods of interaction, and many varieties of associated UI. Those of us who design in these new environments will need to strike the right balance between the familiar and the new.

3) Design Contextual Experiences

IoT will achieve mass adoption by consumers and businesses when products are easily understood, affordable, and seamlessly integrated into their lives. This means we need to expand beyond personalization, and begin to infuse context into the experience.

Designing for context has the potential to permeate experiences, making them more meaningful and valuable.

As we design contextual, holistic experiences that will harness the power of IoT, we need to understand that being inconspicuous, far from being a bad thing, may be the goal. When the IoT product knows you, knows where you are, and knows what you need, it will only make itself present as needed. Things will adapt to people, and before we know it, become fully integrated into their daily lives.

As we design UIs for this new paradigm, we’ll need to understand that the human-computer interaction will be dynamic and contextual—and it will change constantly. At times we’ll need to allow for controls, while at others the systems will simply relay data with notifications that are useful in that moment. Each view will be intelligently displayed in the context of that very moment via the most appropriate channel and device. This contextual design would be micro-interaction driven, timely, and purposeful.

4) Design Anticipatory Experiences

One of the most promising characteristics of IoT is the ability to predict and adapt to situations. The old model of singular actions driving singular reactions is evolving at a rapid pace.

It’s going to be more about the output without much need for input.

“Magical experiences” will be born out of awesome combinations of AI, machine learning, computer vision, sensor fusion, augmented reality, virtual reality, IoT, and anticipatory design. Rumor has it, Apple is investing heavily into AR.

We will be surrounded by a growing number of intelligent IoT systems that will automatically do things for us in a predictive manner. For example, after we use it a few times, the Nest learns our habits and adjusts intelligently without our needing to get involved.

We’ll begin to see systems that will become increasingly predictive. A simple gesture, movement, or word will initiate a series of useful events. There will be a chain of events that aren’t initiated by people at all, because the system will learn and optimize its actions based on a treasure trove of data. These events could be initiated by a person’s proximity, the time of day, environmental conditions (such as light, humidity, temperature, etc.), and previous behavioral data.
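As a toy illustration of such a chain of events (the device objects and thresholds here are hypothetical; a real system would learn them from behavioral data rather than hard-code them), an anticipatory rule might look like this:

    # Act on proximity, time of day, and environmental data, with no explicit user input.
    def maybe_prepare_home(distance_km, hour, indoor_temp_c,
                           learned_arrival_hour, thermostat, lights):
        arriving_soon = distance_km < 2 and abs(hour - learned_arrival_hour) <= 1
        if arriving_soon and indoor_temp_c < 20:
            thermostat.set_target(21)    # pre-heat before the occupant arrives
        if arriving_soon and hour >= 18:
            lights.turn_on("entrance")   # light the entrance after dark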

More than ever, deep user research will play an important role in designing experiences that are anticipatory and contextual. Defining personas, observing user behaviors, and empathy mapping—just to name a few UX techniques—will become crucial in crafting sophisticated user experiences that will feel almost “magical” to people.

5) Most Importantly, Make It Useful!

We’re seeing tremendous advancements in the field of IoT and the role that design will play in it is about empowering people in ways that were not possible before. The demand for deeply satisfying, quality experiences will increase with high expectations and standards.

While all of the above is important, we must never lose sight of the fact that it’s about making people’s lives easier. Designing “moments of delightful surprise” in this new paradigm—along with deep empathy for the user—is a skill designers will need to develop. As we look towards an even more connected digital future, connecting us to “intelligent things” in meaningful ways will allow for more efficient interaction, more productivity and, hopefully, happier lives.

Designers will need to design IoT-driven experiences that are contextual, helpful, and meaningful—optimized for people, not technologies.

“Experiences” will trump “things.”

The next step is for designers to become involved, and design the most seamless user experiences for the Internet of Things. Technologies must evolve into “optimizers of our lives.”

In other words, become useful for people.


The Industry Could Do Without Pixel Density And PPI Marketing


This post was originally published on Toptal.com BY NERMIN HAJDARBEGOVIC – TECHNICAL EDITOR @TOPTAL

A long, long time ago, I used to make a bit of money on the side designing and printing business cards, along with ad materials and various documents. I was young and I needed the cash, and so did my buddy. Some of it went towards new hardware, while much of it was burned on 3-day barbecue binges, fuelled by cheap beer and brandy.

It didn’t take us long to realize the HP and Epson spec sheets, which proudly cited insanely high DPI for their printers and scanners, were as pointless as a Facebook share button on a kinky fetish site. So, we started using cheaper, older hardware other people didn’t want, and put the savings to good use: more meat and more booze. Fast forward twenty years, we still like to work hard and afford the finer things in life, and some of them were in part made possible by tech frugality. We didn’t buy into printer DPI poppycock back then, and we certainly couldn’t care less about display PPI today.

But your average consumer does. Oh yes, most of them still think they can actually see the difference between 440 and 550 pixels per inch (PPI) on their latest gadget. I might have missed a few things over the past two decades, but either the human eye has evolved to such a degree that all kids and many millennials have better vision than ace fighter pilots, or they’re just delusional.

I’ll go with delusional, or at least immature, because I figured out my eyes weren’t that good when I was 15.

In this post I will try to explain what led the industry astray, and what developers and designers need to keep in mind when developing for this new breed of device. You may notice that I have some strong opinions on the subject, but this is not supposed to be a bland, unbiased report on a purely technical issue. The problem was not created by engineers; you’ll have to get in touch with marketing to find the responsible parties.

How Did The PPI Lunacy Get Started Anyway?

One word: Apple.

Apple was the catalyst, but it actually turned out to be the good guy in the long run. The real culprit was the Android mob.

Apple introduced the Retina marketing gimmick with the launch of the iPhone 4, which boasted a small, hi-res display that blew the competition out of the water. In fact, it still looks quite good, and there is a good reason for that: Our eyes couldn’t tell the difference in 2010, and guess what, they can’t tell the difference in 2015.

Most people associate Retina displays with the density of the iPhone 4 display, which was 326 PPI (614,400 pixels on a 3.5-inch display). This is not inaccurate; saying that anything above 300 PPI can be considered a Retina display is more or less correct when talking about a mobile phone. The same metric cannot be applied to other devices because the typical viewing distance is different. Apple’s standard for mobile phones (at the time) was 10 to 12 inches, or 25 to 30 centimetres. The typical viewing distance for tablets is often cited at 15 inches or 38 centimetres, while desktop and notebook screens are viewed from about 20 inches (51 centimetres).

You can probably spot an issue here. Did you use your iPhone 4 at the typical 10-inch viewing distance? Maybe. But what about the iPhone 6 Plus, with two extra inches? Probably not. One of the good things about having an oversized phone is that you don’t need to bring it up to your face to view a notification or a message. Sometimes I don’t even pick my phone up, I just tap it next to my keyboard. Sometimes I pick it up and shoot off a short text without taking my wrist off the table, at a desktop keyboard distance, which is much closer to what Apple had in mind for notebook and desktop screens than mobiles or even tablets.

Being the youngest, insecure kids on the block, the Android gang quickly decided they had to do something about the iPhone 4. The response was swift and came in the form of 720p smartphones, with panels measuring 4.5 to 4.8 inches. When frustrated teens try to outdo someone else, they tend to overdo it, so a generation or two later, 1080p panels became mainstream, and they got bigger, 4.8 to 5.2 inches. The latest Samsung flagship, the Galaxy S6, boasts a 5.1-inch Quad HD Super AMOLED display, with a resolution of 2560 x 1440 and, wait for it, 577 PPI. There is just one thing: The panel uses a PenTile matrix, so many people would argue that it’s not really 2560×1440. Who cares, bigger numbers sell, right?

In Samsung’s defense, the Korean giant did create a use-case for such high resolution screens, sort of. It’s a simple and relatively inexpensive Virtual Reality solution dubbed Gear VR. Google Cardboard is more of the same, but Samsung seems to be taking the whole VR trend a bit more seriously.

The Invasion Of Oversized Androids

There was a bit of a problem with this approach. Like it or not, once you start chasing pixels, you are more likely to end up with a bigger screen. This means more backlighting, more GPU load, more juice, and a bigger battery. And, how many pixels does that leave us with anyway?

Well, for a 720p display, the phone has to render 921,600 pixels for every frame. That goes up to 2,073,600 for 1080p displays and culminates in 3,686,400 on a 1440p panel like the one used in the Galaxy S6. To go from 720p to 1440p, an application processor has to figure out what to do with four times as many pixels with each refresh cycle. This, obviously, isn’t very good for battery life, although Samsung did a great job, thanks to its efficient AMOLED technology and industry-leading 14nm SoC. However, the general trend is a vicious circle: many vendors simply keep making bigger screens to hide the even bigger battery at the back.
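The per-frame pixel counts above follow directly from the resolutions; a quick check:

    resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "1440p": (2560, 1440)}
    for name, (w, h) in resolutions.items():
        print(f"{name}: {w * h:,} pixels per frame")
    # 720p: 921,600   1080p: 2,073,600   1440p: 3,686,400 (four times the 720p load)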

Apple may have started the craze, but the real troublemakers are found elsewhere, hiding behind green droids.

I know what some of you are thinking: “But consumers want bigger phones!”

No. Consumers want whatever you tell them. That’s a fact.

In this case, the industry is telling them they want bigger phones, which are becoming so unwieldy that the next thing they’ll need are smartwatches, so they don’t have to drag their huge phablets from their pockets and purses. Convenient, is it not? Huge phones with insanely high pixel densities are a triumph of cunning marketing over sensible engineering.

Besides, people also want better battery life and we’re not seeing much progress in that department. The industry is tackling this issue with bigger batteries, some of which are almost powerful enough to jumpstart a car. I wonder what will happen when one of them, built by the lowest bidder, decides it’s had enough, springs a leak, or accidentally gets punctured?

That’s one of the reasons why I always found those tabloid headlines about smartphones stopping bullets so hilarious. Sure, it can happen under the right circumstances, but theoretically, you can also win the lottery and get struck by lightning when you go out to celebrate.

Instead Of Pointless PPI, Try Using PPD

While PPI has already been rendered a pointless metric, especially in the era of convertible, hybrid devices, the same cannot be said of pixels per degree (PPD). I think this is a much more accurate metric, and a more honest one at that.

Unlike PPI, which only deals with density, PPD takes viewing distance into account as well, so the same number makes sense on a smart watch and a 27-inch desktop display. Here is how it works, taking the good old iPhone 4 as an example.

The device has a 326 PPI display and it’s supposed to be used 10 inches from the eye; at that distance you end up with 57.9 PPD at the centre of the image, going up to 58.5 PPD at the edge. This is almost at the limit of 20/20 vision. If you have better than 20/20 eyesight, you could theoretically benefit from a higher resolution. However, on a backlit screen, covered by smooth and reflective glass, with a pinch of anti-aliasing, few people could ever tell the difference.

The PPD formula is simple: PPD = 2dr · tan(0.5°), where d is the viewing distance and r is the display resolution in pixels per unit length.

Before you start breaking out your trusty TI calculators, here’s an online PPD calculator you can use.

So, let’s see what happens with a 5.5-inch 1080p phablet (iPhone 6 Plus) if we change the viewing distance. At the standard 10 inches, we end up with 71.2 PPD; at 11 inches the number goes up dramatically, to 78.1 PPD. At 12 inches it stands at 85 PPD, and at 13 inches we see 91.9 PPD.
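Here is the same formula in a few lines of Python. The results are approximate and can differ slightly from the figures quoted in this post, depending on how the PPI and viewing distance are rounded (the 401 PPI below is simply what a 5.5-inch 1080p panel works out to):

    import math

    def ppd(viewing_distance_in, pixels_per_inch):
        # PPD = 2 * d * r * tan(0.5 degrees), as given above
        return 2 * viewing_distance_in * pixels_per_inch * math.tan(math.radians(0.5))

    print(ppd(10, 326))  # iPhone 4 at 10 inches            -> ~57 PPD
    print(ppd(10, 401))  # 5.5-inch 1080p phablet at 10 in  -> ~70 PPD
    print(ppd(13, 401))  # the same phablet at 13 inches    -> ~91 PPD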

Now let’s take a look at some cheap Androids with 720p panels. The visual density of a 5-incher at 10 inches is 52.1 PPD, but since I doubt the distance is realistic (if we are using the same distance for a 3.5-inch iPhone), let’s see what happens at 11 and 12 inches: we get 57.1 PPD and 62.2 PPD respectively. An entry-level 5.5-inch phablet with the same resolution has a density of 47.5 PPD at 10 inches, but at a more realistic 13 inches, we end up with 61.3 PPD. Granted, used at the same viewing distance as the iPhone 4, these numbers look bad, but few people will use these much bigger devices at the exact same distance.

So, why am I changing the viewing distance to begin with? As I pointed out earlier, that’s something most users do without even noticing, especially on Android phones. When I upgraded from a 4.7-inch Nexus to a 5-incher with capacitive buttons, I noticed a slight difference in the way I handled it. When I started playing around with a few 5.5-inch phablets and going back and forth between them, the difference became more apparent. Of course, it will depend on the user; someone might have the exact same viewing distance with a 4-inch Nexus S and a 6-inch Nexus 6, but I doubt it. This is particularly true of Android because the UI is more or less a one-size-fits-all affair and does not take into account the loads of different panel sizes out there. Since I am a fan of stock Android, the difference was even more apparent; Lollipop looks almost the same on a 4.7-inch Nexus 4 and a white-box 5.5-inch phablet.

Apple does it differently. Well, to be honest, Apple didn’t even have to do it until the launch of the iPhone 6 Plus, because it only offered one screen size, which allowed it to optimize the user experience in no time.

Why Pixel Density Matters

Why should developers and designers care about all this? It’s mostly a hardware thing, anyway. Developers have nothing to do with this mess; just let Google, Samsung, Motorola and Apple sort it out.

Devs and designers aren’t part of the problem, but they can be part of the solution.

Like it or not, we have to waste perfectly good clock cycles and milliamps on these power-hungry things. Unfortunately, apart from optimisation, there’s not much developers can do. All mobile apps should be optimised for low power anyway, so that doesn’t make a difference. Designers can’t take into account every single resolution on every single screen size when they polish their designs. At the exact same resolution, they might have to use virtually no anti-aliasing, or moderate anti-aliasing, or go all out with some really aggressive edge-softening. It all depends on the type of device and screen size, not the resolution.

This is trickier than it sounds. Using slightly different settings for 5-inch and 5.5-inch devices with standard resolution screens sounds easy enough, but it would only address one side of the problem. Will a tall, 40-year-old Swede use a 5.5-inch 1080p phone at the same eye distance as a 14-year-old Taiwanese teen chatting with her girlfriends? Of course not.

This, among other things, is why I’ve come to despise PPI. It’s become a useless marketing number; it does not provide consumers with accurate information when they purchase a new device, and from a developer’s perspective, the PPI arms race is doing more harm than good. It’s not making hardware better in a noticeable way, yet it’s making it more expensive and less efficient. It is no longer improving the user experience, either, and in some cases it is even degrading it.

A few years ago, mobile designers had to take into account a few standard Apple resolutions and a handful of Android resolutions and screen sizes. Now, they have to deal with Apple products in more aspect ratios, resolutions and pixel densities. Android, due to its trademark fragmentation, poses a lot more challenges than Apple or Windows (Phone). While the trend has been to inch towards bigger screens and higher resolutions, a lot of Android devices still ship with 4.x-inch screens, and sub-720p resolutions. Add to that a host of legacy devices, and you end up with a pool of green goo.

Ten Easy Ways Of Wrecking User Experience On High-Res Devices

Let’s take a look at how high PPI displays have a negative impact on user experience, starting with hardware and performance issues.

  • Heavy websites are too demanding
  • Battery life and durability may take a substantial hit
  • Effect on storage, bandwidth, load times
  • Games that would otherwise run smoothly become jerky
  • SoC may be throttled, refresh rate lowered

Websites with a lot of demanding content, such as elaborate responsive sites, can be problematic even on underpowered desktops, let alone mobile devices. Five years ago, most of us relied on 1080p desktop displays and the iPhone 3GS had a 480×320 pixel display. Today, most people still use 1080p on desktop platforms, but at the same time they buy 1080p smartphones on the cheap. For some reason, people think it’s OK to place the same strain on a desktop and a $200 smartphone that has a fraction of the processing power. Toptal Software Engineer Vedran Aberle Tokic authored an excellent post dealing with problems caused by responsive websites on mobiles, so please check it out for more details.

Of course, as soon as you start pushing a smartphone or tablet to its limits, battery life takes a massive hit. So, now we have bigger batteries in our phones, and more powerful chargers, and wireless charging, and powerbanks; and we still run out of juice by sundown. This is not just an inconvenience; the battery has to endure more charging cycles, it degrades over time, and now that most smartphones ship with integrated batteries, this poses a problem for the average consumer.

Who cares if your app or website looks marginally better than your competitors’ if it ends up draining the battery faster? And, what if your gorgeous hi-res designs end up loading slower, taking up more storage, and sucking more bandwidth than the competition?

Games and other graphically demanding applications might benefit from higher resolutions, but they can also experience nasty performance issues. Casual games that don’t stress the GPU to its limits can look much better at very high resolutions, and they can be smooth even on underpowered hardware. However, 3D games, even casual ones, are a different story.

I am no gamer, and it’s been more than a decade since I was hooked on a game (Civilization, of course). However, I recently discovered World of Tanks Blitz for Android, and experienced a relapse, so here is some anecdotal evidence.

The game is easy to master, fast-paced, doesn’t require wasting hours per match, and it combines my love of history, technology, trolling people and blowing stuff up. Since I never install games on my phone, I tried it out on a 2048×1536 Android tablet, powered by a 2.16GHz Intel Atom Z3736F processor with 2GB of RAM. UX is good; after all, this is a popular game from a big publisher. Prior to the last update, the system would set the graphics preferences automatically and I was happy with overall performance, about 30 FPS in most situations (dipping to 20+ in some situations). However, the last update allowed me to tweak graphics options manually, and then I got to see what I was missing out on: much better water shaders, dynamic shadows, fancier particle effects and so on. I tweaked the settings a bit, but had to trade a lot of eye candy for performance.

With that particular hardware platform, the game would have been able to run at maxed out quality settings at 1024×768, at a substantially higher frame rate. In other words, my user experience would be better on a cheaper and slower device, with just one quarter of the pixels. Changing the resolution would obviously solve everything, but it can’t be done.

Reducing the load would also allow devices to run smoother for longer periods of time, without having to throttle their processors, automatically reduce screen brightness and so on. In some cases, hardware vendors even opted for lower screen refresh rates to preserve battery life and reduce load.

This brings us to aesthetics, and ways of messing up UX on hi-res devices that have nothing to do with performance issues:

  • Reliance on rasterised vs. vector graphics
  • Use of resampled images
  • Viewing old low-res content
  • Using legacy apps
  • Inadequate or overly aggressive anti-aliasing

Although vector graphics play a prominent role in design, we still have to rely on rasterised images for a lot of stuff. Vector graphics are more or less useless for everyday content delivery. For example, when developers create a simple news reader app, it might look magnificent, even on a low budget, on all devices. However, if the content provider doesn’t do a good job, the sleek and sharp design will be ruined, with inadequate visual content, such as low resolution images and video, compression artefacts, bad anti-aliasing, and so on. If forced to reuse older images, they may be tempted to resample them, making an even bigger mess.

The same goes for old content and apps. Not all websites look better on high resolution displays; not all websites are regularly updated to take advantage of new hardware. Ancient CSS does not look good on high PPI devices. Older apps can also misbehave, or end up with a broken UI.

Anti-aliasing can be another problem, but one of the ways of making sure it’s spot on is to rely on PPD rather than PPI. Of course, there is only so much developers and designers can do, especially if their products rely on third-party content, uploaded and maintained by the client.

Things Will Get Worse Before They Get Better

During any period of rapid tech evolution, teething problems are bound to occur. The fast pace of smartphone development and adoption has created numerous opportunities, along with more challenges for developers.

This high resolution race won’t go on for much longer; it’s impractical and becoming pointless. High-res screens are already shipping on low-cost devices, and the trend is going to slow down before it comes to a grinding halt. In the end, we will end up with a few standard resolutions from $200 to $1000 devices, and that’s it. There is a lot of room for improvement on other fronts, specifically, battery life and overall user experience.

Still, I think it’s a good idea to keep an eye on market trends and keep track of sales figures, just to be one step ahead and to know what to expect. It’s almost as important as tracking the spread of different OS versions and platform market share.

Unfortunately, there is not much developers and designers can do to tackle many of these issues. In my humble opinion, the best course of action is to keep clients in the loop, make them aware of potential issues beyond your control and issue clear guidelines on content that should be added to websites and mobile apps.


Welcome to the Idea Economy!

Alok Ranjan

Alok Ranjan is a marketing specialist and management consultant based in Mumbai, India. He believes brands are caught in a maze of technology and economic dynamics, caused by disruptive forces that are changing the way consumers interact with brands. He blogs at www.alokr.com
LinkedIn: in.linkedin.com/in/ranjanalok

The value of an Idea lies in the using of it
– Thomas Edison

In February 2016, Kickstarter, the largest crowdfunding platform, celebrated its 100,000th project funding for a visually striking photography project that leveraged a system that didn’t exist seven years ago. Talk about the Pebble Smartwatch, one of the most successful alternatives to the Apple Watch, or Fitbit, which tracks daily physical activities and keeps the user motivated, or Gogoro, the electric scooter that aims to transform urban mobility, or Hointer, a retail store that is revolutionizing customer experience – all have a commonality that makes them successful. These innovations, which were unthinkable until a few years ago, are transformative ideas that hold the power to shape the future of mankind. It is interesting to note that only 71 firms still exist from the original 1955 Fortune 500 list, while the others couldn’t generate ideas that could keep them in business. Welcome to the Idea Economy!

The story of David and Goliath no longer holds true in an Idea Economy, where a well-executed and disruptive idea creates an equitable platform for anyone to win. The fall of Kodak, Blockbuster, HMV and Borders is attributed to a bankruptcy of ideas that failed to adjust to market demands. A few years ago, Facebook invested $2 billion in virtual reality – an immersive technology that holds the power to completely change our experiences. Oculus Rift is not a product, but an idea that Facebook could foresee shaping the course of the future. It is shocking to learn that the most highly rated enterprises on earth – Apple, Google and Facebook – are no longer top contenders among the 50 smartest companies in the world, as ranked by the MIT Technology Review in 2015. They trail behind the most innovative idea originators of modern times – Tesla Motors and Xiaomi.

Driving the change

The democratization of technology has disrupted the business and operating models of both large and small enterprises. The ability to conceptualize an idea and turn it into reality is what gives rise to a new product or service. In an Idea Economy, disruption is a way of life and no idea is insured against it. The survival of an enterprise depends on its ability to visualize the future, leverage technology, and respond to macroeconomic threats.

Technology serves as the backbone that supports an idea in its journey to reality. Using the right tools at the right time creates a world of opportunities for both startups and large enterprises. Legacy technologies no longer address the needs of the connected customer, and enterprises are busy transitioning to new-age disruptive technologies that score over the competition in an Idea Economy.

Measuring the success

In an Idea Economy, success is a relative term, often dependent on an enterprise’s ability to make bold decisions and take risks. Infusing agility into business processes, scaling infrastructure to support the new business model, and staying flexible to market dynamics are key priorities for reaping benefits in an Idea Economy. It is pertinent for enterprises and startups to acquire the right skills in order to survive in a hypercompetitive world. The Idea Economy expects enterprises to follow and imbibe a startup culture, one devoid of organizational and technological silos, where risk-taking is a cultural norm and management believes in the idea fully enough to turn it into reality.

“We’re now living in an Idea Economy where success is defined by the ability to turn ideas into value faster than your competition.”  – Meg Whitman, Chief Executive Officer, Hewlett Packard Enterprise

As Meg Whitman puts it, in an Idea Economy, ignoring business processes and operational challenges is a surefire recipe for disaster, even when you are backed by a strong technical team. Careful examination of the business nuances and effective execution make for a sustainable business idea.

At a time when an idea can spawn itself at zero cost through the Internet, it is imperative that time to market is addressed judiciously. As both enterprises and startups gain access to the same technologies, their success depends on the ability to fructify the idea. An Idea Economy provides a level playing field for everyone ready to defy the status quo and consistently deliver value to customers while surviving disruption.

This article was originally published in The Consultants Review Magazine. Click here to download your copy.

How Google Works


How Does Google Work?

Infographic by PPCBlog