Israel’s Gingee unveils Curve cross-platform tool for app metrics

Gingee's Curve delivers cross-platform app metrics.


Israeli startup Gingee is launching its Curve cross-platform app metrics tools today to give app marketers and developers a single dashboard to review metrics from multiple mobile platforms. The tools are aimed at reducing development and marketing complexity for those hoping to reach wider audiences with their apps.

Curve helps app makers and marketers understand whether, and why, a user experience is working well across all platforms and functions. With those answers, growth hackers, marketers, and user-experience staff can track time spent in an app, return visits, crashes, loading times, and more.

Gingee says Curve differs from competing solutions because it focuses on app performance metrics, including user behavior, crash analysis, and monetization, across all operating systems in real time. It goes beyond ‘what’ is wrong by explaining ‘why’ the application isn’t achieving the desired result. The company said it’s this functionality that lets Curve meet the cross-organizational needs of the growth hacking, marketing, user experience, development, and quality assurance teams.

Curve lets the developers see the user experience on each screen and decipher the problem by putting the developer in the position of a user just before a crash. This makes it easier to find bugs and reduce abandonment rates. Curve is customizable for different key performance indicators (KPIs). And it can run in the cloud or be installed on a client. Curve has auto-tagging, which means a developer won’t have to tag a part of the app for review but can automatically capture the customer’s entire journey before, during, and after an event.
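Gingee hasn’t published Curve’s SDK, but the general idea behind auto-tagging can be sketched in a few lines of Python: rather than the developer instrumenting individual screens, a wrapper records every handler it decorates into an ordered journey log. The decorator, function names, and fields below are hypothetical illustrations, not Gingee’s actual API.

```python
import functools
import time

journey_log = []  # ordered record of the user's journey, captured automatically

def auto_tag(handler):
    """Record every decorated screen handler without manual tagging (illustrative only)."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return handler(*args, **kwargs)
        finally:
            journey_log.append({
                "event": handler.__name__,
                "started_at": start,
                "duration_ms": round((time.time() - start) * 1000, 2),
            })
    return wrapper

@auto_tag
def open_checkout_screen():
    time.sleep(0.01)  # stand-in for real screen work

open_checkout_screen()
print(journey_log)  # one entry per screen visited, in order
```

In a production analytics tool the log would be batched and shipped to a backend, but the principle is the same: capture happens automatically before, during, and after the events the team later wants to analyze.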

Back in February 2014, Gingee unveiled development tools for mobile developers who want to make cross-platform apps. At the time, Gingee pointed out that there were thousands of different Android devices running a variety of versions of Android, along with smartphones and tablets running iOS, BlackBerry, Windows Phone, and other platforms. Those tools had a similar aim: giving developers a truly cross-platform solution with a drag-and-drop user interface.

Gingee claims its integrated development environment (IDE) automatically generates high-end code. The company says its cross-platform application development technology is based on a proprietary algorithm that analyzes the relations between objects and maps them uniformly to every operating system and device, ensuring a consistent interface and user experience across all of them.

“While working on our cross-platform app development solution, we understood the need for a cross-organizational app intelligence tool to support the needs of the growth hacking, marketing, development and user experience teams,” said Roei Livneh, CEO and cofounder of Gingee. “Curve completes Gingee’s development philosophy of offering app developers a solution to ‘Build, Convert and Analyze’ their applications.”

Livneh started Gingee in 2011 with $1.2 million in backing from angel investors. The company has 13 employees. The development environment competes with rivals such as Unity Technologies, Corona, and Apportable. Gingee said millions of users have benefited from apps built with the tools. The new metrics tool competes with business intelligence solutions from Appsee, Adjust, and Tune.

“We’ve worked with several user funnel / app intelligence solutions; each provided us with actionable insights on one functional area – marketing, user experience, product, or QA. Curve was the only solution capable of supporting all of our organization’s needs,” said Ori Zimber, CEO of Fame Boost Games, the developer of Big2 Bonanza, a multiplayer card game on Facebook and Google Play.

VR startups: 4 lessons to learn from the AR hype cycle

The Oculus Rift virtual reality headset. (Image credit: Dean Takahashi)


Back in 2008, augmented reality (AR) could do no wrong. There was unbelievable hype around the technology and a growing amount of interest from just about every brand, company, and investor. Fast-forward three years and by 2011, if you were an AR company, you might as well have worn a scarlet letter.

The disillusionment with AR technology among brands, agencies, and consumers was quick and painful. The tech was limited, it didn’t match the vision laid out in concept videos, and demo videos showed optimal conditions rather than how AR would actually perform in the real world. Expectations were not met, and while some AR companies survived the hype cycle or were acquired, most closed their doors.

Now, five years later, with HoloLens, Kinect, Project Tango, Magic Leap, and other next-generation devices poised to take AR to its impending renaissance, AR’s tech cousin, virtual reality (VR), is entering its own hype cycle. With VR poised to bask in the spotlight as the “it” technology of 2016, what can VR companies learn from the lessons of the prior AR hype cycle?

1. Focus on your platform, not custom brand executions

Approximately 75% of the inquiries my AR company, Zugara, received from 2008-2011 involved custom AR applications. These often meant customizing our technology in ways that didn’t help advance or build upon our platform. Some of the custom inquiries had budgets associated with them; most did not. As a VR company, it’s critical to focus on potential projects that advance your platform with minimal customization. Without any initial funding, this will be harder to do, but it’s important that you’re not reinventing the wheel for others at the expense of your own product or platform.

2. Beware of the PR pyramid scheme

As with AR in 2008-2011, there is a lot of PR value for VR companies to capture. However, it often comes at the expense of revenue. During this current VR hype cycle, there will be brands and agencies trying to leverage PR “value” in exchange for you absorbing the development cost of their initiative. Nothing gets the blood boiling more than a multibillion-dollar brand or tech company trying to get your small startup to subsidize costs for their brand or product. This can quickly devalue your product or platform, so choose your PR opportunities carefully and strategically.

3. Develop a product strategy especially for outbound sales

When an industry is hyped to the extent VR is right now, most of your sales and product inquiries will inevitably be inbound. From 2008-2011, approximately 95% of our inquiries were inbound. This was great for a while, but soon the inbound sales spigot shut off. VR hype will go through the same cycle as marketers and agencies leverage VR for its initial PR value before moving on. With consumers, you’ll have a longer runway, but you’ll need to be careful about the next point …

4. Enjoy the hype environment, prepare for the future, and avoid “And then what?”

From 2008-2010, AR projects involving black and white markers (and eventually images) were all the rage. The problem? They soon became a “been there, done that” type of experiential project for most brands and agencies. There was nothing of value for the consumer beyond the initial gimmick of viewing an animation via a webcam or mobile browser. It’s not hard to see comparisons with current VR simulations that involve sitting or standing and viewing an environment. Though these 360-degree simulations seem new now, it’s only a matter of time until they lose their luster as the next BSO (Bright Shiny Object) lands on marketers’ and consumers’ radars. So it’s important to continue to evolve your technology and platform so you’re never asked, “This is cool … but then what?”

It’s an exciting time to be in both the AR and VR industries. Both technologies have seen their ups and downs over the years, but now that the technology is mature enough to bring the vision of AR and VR pioneers to life, it’ll be an exciting future if we can all avoid a new hype cycle.

Matthew Szymczyk is CEO of Zugara.

Cloudlands: VR Minigolf is a zany way to experience virtual reality with the HTC Vive

Cloudlands: VR Minigolf lets you play golf by swinging a putter in VR.


Golf games are usually a standard kind of sports title on a new video game platform. But the formula only works so well when you’re pretending to swing at a ball with a 16-button controller.

In virtual reality, though, you can get much closer to the actual experience of playing golf, and this game has the potential to reach a much wider audience than many VR titles. That’s what Futuretown is trying to deliver with Cloudlands: VR Minigolf, a new game coming soon for the HTC Vive. The title is one of a dozen releases I saw for the HTC Vive, which debuts in April on the PC. The system is Valve and HTC’s bid to be a player in what could be a $30 billion industry by 2020, according to tech advisor Digi-Capital. I saw the game at a Valve event in Seattle.

With the HTC Vive, you wear a VR headset so you can see into an immersive virtual world, but your hands are also free to interact with it. You hold an independent controller in each hand, and sensors in the room track the full motion of your body with precision. The system also designates a safe area where you can walk without bumping into your furniture.

Above: Cloudlands: VR Minigolf

Image Credit: Futuretown

That makes the Vive a very interesting system for playing games like miniature golf, which requires detecting a lot of subtle movement. Previous motion-based games on the Wii and Xbox’s Kinect were pretty inaccurate. But with the Vive, your movements are tracked precisely.

Futuretown used the sensor system to develop a physics-based mechanic around putting. You hold the controllers as if you were holding a real putter, swing the putter back, and then hit the ball. The sound of the putter striking the ball is pretty good, but when I played, I got the sense that the force required to make the ball travel the right distance was off. I hit the ball, and my stroke felt way too soft. Hopefully Futuretown can adjust it so you don’t have to take a mighty swing every time you hit the ball.
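Futuretown hasn’t shared how its putting physics work, but the tuning issue I ran into can be illustrated with a toy model: the ball’s launch velocity is the club head’s speed at impact, scaled by a power factor and aimed along the face direction, and raising that factor is one way a gentle stroke could travel a realistic distance. Every name and number here is an assumption for illustration, not the game’s actual code.

```python
import math

def ball_velocity_after_putt(club_speed, aim_x, aim_z, power_scale=1.0):
    """Toy putting model: ball speed = club head speed at impact * power_scale,
    directed along the normalized aim vector on the green (meters per second)."""
    length = math.hypot(aim_x, aim_z) or 1.0
    speed = club_speed * power_scale
    return speed * aim_x / length, speed * aim_z / length

# A gentle 0.8 m/s stroke aimed straight down the green:
print(ball_velocity_after_putt(0.8, 0.0, 1.0))                   # (0.0, 0.8)   -> felt too soft
print(ball_velocity_after_putt(0.8, 0.0, 1.0, power_scale=1.6))  # ~(0.0, 1.28) -> tuned up
```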

The environment around the game and the creativity of the course layouts are where the game shines. There are cool natural landscapes and very weird contraptions like the ones you find in the best miniature golf courses. You can hit a ball through the rotating blades of a windmill, but you also have to send it down multiple levels and through impossibly small gaps in barriers. Don’t be surprised if it takes you ten strokes to get the ball into the hole.

Above: An impossibly tough hole in Cloudlands: VR Minigolf

Image Credit: Futuretown

But this game should appeal to players of all ages and all skill levels. It’s the kind of game that could widen the audience for VR.

The company has raised money, but it isn’t saying how much yet. It has 17 employees in Taoyuan, Taiwan, and Vancouver, Canada. The founders are Johan Yang and Justin Liebregts, who started on Cloudlands: VR Minigolf after visiting HTC’s headquarters in Taiwan in March 2015. The game is expected to arrive in April alongside the launch of the HTC Vive.

“We decided VR was the ‘thing’ we want to do for the rest of our lives,” they said.

The team is also working on other VR titles such as a shooter dubbed Jeeboman. Cloudlands: VR Minigolf will also be available on the Oculus Rift.

Adtech isn’t in trouble — it’s just misunderstood


Adtech took a very public beating in 2015 – falling stock prices throughout the year and an IPO market that has effectively been shut since mid-2014 have led to speculation about the state of the whole sector. However, research by my firm shows that, while the number of M&A deals was down in 2015, the decline was relatively modest at eight percent. There were still over 400 completed deals in the adtech and martech sector last year, and multiples remained healthy.

So what’s behind these mixed signals?

There was a flurry of adtech IPOs in 2013 and early 2014, as private companies took advantage of investor appetite for the fast-growing adtech sector. With hindsight, many of the companies that went public did so too early. The market for their solutions was still nascent, and they were unable to meet investor expectations for revenue growth. Furthermore, those investors were relatively uneducated about the market and clearly held a number of misconceptions.

This situation was exacerbated by the way some of the companies were positioned as they went public. While they were represented as being Software-as-a-Service (SaaS) businesses, their revenue models were actually transactional and tied to media spend. This doesn’t have the stickiness and revenue visibility of SaaS. A re-rating of the sector was inevitable as companies missed growth targets and investors became better versed in how the market actually operates.

The performance of a dozen or so listed companies does not, however, reflect the overall health of the sector.  The private company market is buoyant, deal activity is high, valuations are healthy, and innovation is rife.

That said, let’s start with the bad news: Undoubtedly the shake-out of the public adtech market has, to a certain extent, filtered down to the private one. Raising venture capital has become harder — many VCs consider themselves overexposed to adtech and are more highly attuned to innovation and differentiation than 12-24 months ago. Businesses that don’t have proprietary technology and a differentiated offering have had to realign their valuation expectations.

However, at the same time that public market investors have been losing their appetite for the sector, interest from strategics has been growing, and this isn’t restricted to outright acquisition. Sky’s recent $10 million investment in DataXu is a good example of this – a new entrant into the sector providing growth funding, which might otherwise have been provided by the public markets or late stage VCs.  Similarly, the venture capital arms of Google, Salesforce, Unilever, and Hearst Media have all been actively investing in the sector throughout 2015. Simply put, growth capital is coming from a wider range of sources.

Above: Adtech and martech deals, Q1-Q4 2015

One of the unique features of the adtech market is the sheer number of companies for which advertising technology is strategically relevant. Not many businesses across the breadth of the (cash-rich) technology, media, and telecom universe can ignore the disruption occurring in advertising and marketing. For some, such as enterprise software vendors, it presents an opportunity to integrate additional revenue streams. Conversely, for traditional media players, notably publishers and broadcasters, it offers a defensive mechanism to protect those revenues. Other interested parties include telcos, traditional data groups, social networks and other diversified Internet groups, and consultants, not to mention the pure-play adtech and martech consolidators themselves. An increasingly competitive market coupled with a frenetic rate of innovation suggests adtech M&A will remain buoyant.

The other critical factor in the ongoing strategic interest in this sector is the shift in momentum from adtech to martech. The public markets now make a clear distinction between each segment.  While definitions vary, the market generally thinks of adtech as technology designed to streamline the buying and selling of digital ad inventory, where the revenue model is tied to media spend or other performance-based metrics. By contrast, martech is software sold on a subscription basis to facilitate and optimise the broader marketing function. For investors, the subscription-based revenue model is generally most attractive because it provides greater revenue visibility and (potentially) stickiness. In reality the picture is more blurred, but this shift has also become apparent in the private market, and many of the very high multiple deals in 2015 were in the martech segment.

I can’t overstate adtech’s importance, though — media spend represents over half of most marketing budgets, and global ad spend is $570 billion per year.

In reality the sectors are (slowly) converging, and the end game will surely see vendors providing integrated marketing and advertising platforms to brands. This convergence will take longer than some commentators are currently predicting, but it will underpin continued M&A activity in the sector.

Without doubt, the adtech sector faces challenges — continued consolidation behind the “walled gardens” of the digital giants, such as Amazon and Facebook, the impact of ad blocking, ongoing data privacy concerns – but the opportunities are as great as the challenges. As in every tech sector, startups and growth companies will drive innovation, and the large strategic acquirers will continue to bring that innovation in-house through M&A throughout 2016.

Julie Langley is a partner at M&A and fundraising advisory firm Results International. She has over 15 years’ experience advising technology and digital media companies on corporate transactions, including company sales, acquisitions, financings, MBOs, and joint ventures. She has completed transactions with companies including Oracle, Microsoft, Experian, Moody’s, IAC, BT, Axel Springer, CNET, and DMGT.

Depth-sensing cameras will open up a whole new frontier for smartphones


[Full disclosure: Body Labs is backed by Intel Capital and is working with Intel RealSense to develop 3D body scanning software for smartphones.]

2016 marks the beginning of a fundamental leap forward in smartphone hardware: depth-sensing cameras. We’ve already seen accelerometers, gyroscopes, barometers, cameras, and fingerprint sensors become common on even the most budget-friendly smartphones. And, due to the accelerated hardware arms race between the world’s top manufacturers, we’ll see depth sensors on some consumer tablets hitting the market this year.

These sensors supplement today’s monocular RGB images with per-pixel depth information, often derived by projecting a pattern of infrared light into a scene. The technology will enable enhanced object, body, facial, and gesture recognition.
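As a rough illustration of what that per-pixel depth gives developers, the standard pinhole camera model back-projects each pixel and its depth into a 3D point, turning a depth frame into a point cloud. This is a generic sketch, not any vendor’s SDK; the intrinsics are made-up values for a hypothetical 640x480 sensor.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (meters) into camera-space 3D points using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels where the sensor returned no depth

# Made-up intrinsics for a hypothetical 640x480 depth sensor
depth = np.full((480, 640), 2.0)  # a flat wall two meters away
cloud = depth_to_point_cloud(depth, fx=570.0, fy=570.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```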

Depth sensors will not only make a smartphone more aware of its immediate environment but will also improve the ability to accurately isolate and identify a user’s body in space. Ultimately, this will spark an explosion in consumer applications ranging from virtual apparel try-on to personalized VR experiences mapped to your living area.

The true game-changing element will be the form factor, not the technology. Depth-sensing cameras are hardly new — Microsoft released the Kinect in 2010. However, if you’re Google Maps trying to navigate indoors or Oculus attempting to improve the immersive experience of virtual reality (VR), you need to solve what I refer to as an “input barrier” with enabling hardware. This barrier largely consists of three main challenges:

1. Cost. Previously, comparable technology has cost anywhere from $10,000 to more than $250,000 for a high-end laser scanner — not within the range of the average consumer. However, the recent commoditization of depth-sensing cameras has made the cost of implementing them into smartphones more justifiable.

2. Convenience. With sensors coming from Intel RealSense, Google Project Tango, and Apple’s PrimeSense, depth-sensing cameras are now small enough to be included in smartphones. And because the sensor is built into the phone, the user only has to commit to purchasing one device.

3. Adoption. This technology isn’t valuable if a large number of hardware manufacturers refuse to adopt it. Fortunately, the smartphone industry benefits from a very fast product refresh cycle. Unlike televisions, which U.S. consumers upgrade every seven or eight years, smartphones are upgraded approximately every 18 months. That’s why many Americans could have a depth sensor in their phone before they have a 4K TV in their living room.

With Intel, Google, and possibly even Apple poised to push their sensors onto mobile devices this year, the data from 3D sensors could quickly become a viable platform for developers to build on. Building software around 3D data is challenging, but companies like mine are already working to transform the raw data generated from these sensors into easy-to-use 3D models that enable new applications and functionality. As a result, this new platform could disrupt several major markets before the year is over:

1. Digital photography

By incorporating 3D information with photos and video, we will have new options when it comes to editing digital content. For example, you could automatically remove and replace the background of an image, or segment (e.g., “cut out”) a specific object for use as a standalone graphic, which could become a valuable feature of smartphone photography.
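A minimal sketch of why depth makes that kind of segmentation straightforward, assuming an RGB image with a depth map aligned to it: keep the pixels within a chosen distance of the camera and blank out the rest. The threshold and array shapes below are illustrative assumptions.

```python
import numpy as np

def cut_out_foreground(rgb, depth_m, max_depth=1.5):
    """Keep pixels nearer than max_depth meters (the subject); black out the background.
    rgb: (H, W, 3) uint8 image; depth_m: (H, W) depth map aligned with rgb."""
    mask = (depth_m > 0) & (depth_m < max_depth)  # valid readings close to the camera
    cutout = rgb.copy()
    cutout[~mask] = 0  # background pixels become black (or could be made transparent)
    return cutout, mask

# Toy example: a 4x4 image whose center 2x2 block is one meter away, the rest three meters
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.full((4, 4), 3.0)
depth[1:3, 1:3] = 1.0
foreground, mask = cut_out_foreground(rgb, depth)
print(mask.astype(int))  # 1s mark the "cut out" subject
```

Replacing the background is then just a matter of compositing a different image wherever the mask is false.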

2. Mapping and navigation

Google Maps is the most widely used navigation software in the U.S., but its usefulness ends when you enter a building. Where GPS is unavailable, depth-sensing technology can give mapping applications accurate 3D models of building interiors. It can also establish a user’s position and orientation within those buildings to guide them directly to a product or service. The University of Oxford has been experimenting with depth sensors for a few years to provide the visually impaired with a set of “smart glasses” that could assist them in navigating the world around them.

3. Fashion and apparel

Apparel fit has been estimated to be a multibillion-dollar problem for many retailers, with more than a third of their online sales returned due to inaccurate sizing. But depth sensors in smartphones could enable accurate sizing recommendations and custom tailoring without a user having to leave their living room. Retailers could pair sizing recommendation engines such as True Fit with tools that capture personalized body shape, driving down returns and improving their knowledge of their customers.
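As a hedged sketch of that last step, suppose a depth sensor has already produced a handful of body measurements; a recommendation engine can then match them against a size chart. The chart values and function names below are invented for illustration and are not True Fit’s or any retailer’s actual data.

```python
# Hypothetical size chart: size -> (chest_cm, waist_cm)
SIZE_CHART = {
    "S": (92, 78),
    "M": (100, 86),
    "L": (108, 94),
    "XL": (116, 102),
}

def recommend_size(chest_cm, waist_cm, chart=SIZE_CHART):
    """Pick the size whose chart measurements are closest (least squares) to the scan."""
    def distance(size):
        c, w = chart[size]
        return (c - chest_cm) ** 2 + (w - waist_cm) ** 2
    return min(chart, key=distance)

# Measurements extracted from a hypothetical smartphone body scan
print(recommend_size(chest_cm=103, waist_cm=88))  # -> "M"
```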

4. Virtual reality (VR) and augmented reality (AR)

A challenge with VR is enhancing the sense of presence, which relates to three major factors: 1) the use of your hands, 2) occlusion, the effect of one object blocking another from view, and 3) moving through the environment. With a VR headset like Samsung’s Gear VR powered by a depth-sensing smartphone, a game could identify obstructions in the real world and use them to decide how to animate the virtual one. By maintaining a sense of presence in reality as well as virtually, users could roam freely about the game and customize it to their living space.

5. Product design and 3D printing

The 3D printer market is estimated to grow to $5.4 billion by 2018. With depth sensors, users could scan real-world objects or people from their smartphones in a matter of seconds. Artists could then seamlessly build, print, and manufacture personalized products at scale. This technology will reduce the expertise and overhead required to design and print in 3D. We’re already seeing companies such as Nervous System designing in 3D and then using its Kinematics system for 4D printing, which creates complex, foldable forms composed of articulated modules. Combined with companies like Voodoo Manufacturing, which is delivering fast, affordable, and scalable 3D printing, this would drive down the cost and time associated with product development cycles.

6. Health and fitness

There are more than 138 million health and fitness club members worldwide, in an industry with an estimated market size of $78.17 billion. These clubs have three priorities: 1) bring in new members, 2) retain current members, and 3) get existing members to spend more on additional services. To justify new services, health clubs are looking to equip trainers with depth-sensing cameras to efficiently visualize recorded progress, like weight loss or muscle growth, over a workout regimen. This could also unlock new features for apps such as Google Fit, Apple’s HealthKit, and Samsung’s S Health by enabling them to track shape change over time. Companies such as VirtualU are also taking 3D scanning technology and partnering with health clubs to provide vivid health and fitness metric tracking that goes beyond antiquated measurements like BMI.

Those are just a few of the many possible applications for depth-sensor enabled smartphones. But it will take hard work and a lot of investment to bring this potential to life. Even high-quality 3D sensors are only useful if supported by a robust collection of software libraries.

And sensor makers will have to adhere to a standard set of APIs to prevent platform fragmentation, an issue that’s already prevalent in the smartphone industry. Such APIs will also need to mitigate what is currently a steep learning curve when it comes to application development around 3D images. From first-hand experience, my colleagues and I can attest that working with raw RGB-D data currently draws too heavily on PhD-level machine learning and requires more than a passing familiarity with relevant academic research.

We anticipate that companies releasing new 3D sensors will need to invest heavily in the software development resources these sensors require. Even so, the enormous potential value of these new devices will more than outweigh the investment needed for their adoption. With depth sensors making their way into devices this year, the currently iterative smartphone industry is potentially set for another exciting shake-up.

Eric Rachlin is a cofounder of Body Labs and leads the company’s product development. Before Body Labs, he worked as a senior research scientist at the MPI for Intelligent Systems, where he helped manage a team of computer vision researchers to build a statistical model of human pose and shape.