Steve Jobs’ “State of the Apple Union” Conference Call

October 21, 2010

Apple held an event yesterday to showcase some upgrades to its products, most notably a new version of OS X and an upgraded MacBook Air. But instead of rehashing those product announcements, I’d like to look at a few things Steve Jobs said during a conference call earlier this week, which provided a little more insight into Apple and Jobs’ own opinions on some trending issues. Quotes are taken from the transcript at Mac Daily News.

The Infamous Open vs. Integrated Debate

The first thing most of us think about when we hear the word ‘open’ is Windows, which is available on a variety of devices. Unlike Windows, however, where most PCs have the same user interface and run the same apps, Android is very fragmented. Many Android OEMs, including the two largest, HTC and Motorola, install proprietary user interfaces to differentiate themselves from the commodity Android experience. The user is left to figure it all out.

Jobs (like many other people) seems to miss the point of Android’s “openness”. “Open” (as in “open-source”) means simply that the source code is available to anybody, for free, to download, tweak, and use. This is exactly what Android head Andy Rubin was referencing in his now-famous first tweet. The points that Jobs brings up are indeed true: Android is very fragmented (probably the biggest problem for Android at this point) and OEM proprietary software does confuse (and annoy) users. But these are simply the trade-offs of such an open product.

The fragmentation comes from Google’s lack of control over each manufacturer and how they use Android. Google is only in charge of the OS itself, fixing bugs and releasing the software whenever it is ready; the manufacturers are allowed to do basically anything they want with it. Phone manufacturers around the world, however, are simply unable to keep up with the pace of each release and update, opting instead to focus on one major release, tweak it for their specific hardware, and worry about the updates later. The same is true for app developers, who don’t have the resources to develop for every new Android release, a point Jobs also referenced. This is why so many phones are still on Android 1.6 or 2.1/2.2, and why some apps only work with 2.2 or higher (the official Twitter app, for one). Even though many phones are still running Android 1.6, why would Twitter waste resources developing for a dead-in-the-water version of Android?

The fragmentation issue is mostly a matter of physical companies keeping up with digitally-released software. There will always be a lag between a software release and a product release due to development and manufacturing, at least when the whole process isn’t being overseen by one company, à la Apple. That oversight is what allows Apple its tight integration and complete product releases; for one thing, Apple surely is not releasing software updates to the general public the moment they are finished. Is the Google method of focusing on the software, releasing as much as possible as soon as possible, and letting the manufacturers and developers catch up and do the dirty work better or worse than the Apple method of overseeing everything and releasing one tightly-integrated product? They both have pros and cons, and are simply different ways of doing things.

Since the Android source code is so readily available, it allows manufacturers to build on top of the base, customizing their products to their own standards and, yes, fragmenting the Android experience. What is Google’s answer to this? Most likely: nothing. The point of making Android open-source is to allow this very activity. Google is ceding power to the companies using its product, letting them develop and battle for consumer dollars on their own ground, in effect encouraging competition among its own partners.

In reality, we think the ‘open’ vs. ‘closed’ argument is just a smokescreen to try and hide the real issue which is: What’s best for the customer? Fragmented versus integrated. We think Android is very, very fragmented and becoming more fragmented by the day. And, as you know, Apple strives for the integrated model so the user isn’t forced to be the systems integrator. We see tremendous value in having Apple, rather than our users, be the systems integrator.

The question of “what’s best for the consumer?” is a loaded one. “Best” is a relative word. You could argue what’s “best” for most consumers, or for a large majority, but to lump every consumer into the same category is somewhat disingenuous. Now, I don’t want to base the following argument solely on myself, but I know many people with the same attitude: I purchased an Android phone in part because of the fragmentation. Right now I have at least five different Android ROMs backed up on my computer, and I have run countless Android versions on my phone (everything from 1.5 to 2.2). I greatly enjoyed the long, painful process of unlocking my phone (enabling root access) by flashing the onboard memory with custom software, and I revel in the freedom to run whatever ROM or software I choose. If I had never rooted, I would still be stuck at the mercy of my cell phone service provider, only getting updates when (and if) they decided to get around to it. That “integration” of provider, hardware, and software is not for me, nor for many other people. And I’m not so naive as to think people like me are the majority of consumers (far from it), which is why I completely understand Apple’s approach. Jobs is entirely correct when he says there is “tremendous value in having Apple, rather than our users, be the systems integrator.” Most people wouldn’t know the first thing about running unsigned software, and they probably have no desire to, either. I only ask that we don’t totally discount the people who do want to be their own integrators.

Oh lord, is there more??

The (7″) Tablet Debacle

I’d like to comment on the avalanche of tablets poised to enter the market in the coming months. First, it appears to be just a handful of credible entrants, not exactly an avalanche. Second, almost all of them use 7-inch screens as compared to iPad’s nearly 10-inch screen. Let’s start there.

One naturally thinks that a 7-inch screen would offer 70% of the benefits of a 10-inch screen. Unfortunately, this is far from the truth. The screen measurements are diagonal, so that a 7-inch screen is only 45% as large as iPad’s 10-inch screen. You heard me right: just 45% as large.

In case you don’t keep up with unreleased and speculative product announcements, it’s almost an understatement to say there is an avalanche of tablets poised to hit the market in the coming months. Despite the dubious use of the word “credible” (most likely: “anything not an iPad”), tablets are set to be the next “it” gadget to have, and people are going to have a slew of options. The first wave does indeed seem to be 7″ tablets, so let’s see what all the fuss is about. To be honest, I think the size issue is purely a matter of personal preference and should come down to what the tablet will be used for. Personally, I agree with Jobs: 7″ is the perfectly bad in-between size, too big to pocket like a phone and too small to read as comfortably as a magazine (or an iPad). But someone else might find that 7″ is perfect for his or her life, and having the option is definitely better than not.
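As a quick sanity check on Jobs’ arithmetic (this is my own back-of-the-envelope figuring, not Apple’s), screen area scales with the square of the diagonal for a given shape:

    (7 / 9.7)^2 ≈ 0.52

so a 7″ panel has roughly half the area of the iPad’s 9.7″ panel even before accounting for shape. Most 7″ tablets also use a wider aspect ratio than the iPad’s 4:3, and a wider rectangle has less area for the same diagonal, which is how you land near the 45% figure Jobs quoted.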

Almost all of these new tablets use Android software, but even Google is telling the tablet manufacturers not to use their current release, Froyo, for tablets and to wait for a special tablet release next year. What does it mean when your software supplier says not to use their software and what does it mean when you ignore them and use it anyway?

This is a valid point. Not to sound like an apologist, but I often wonder if Google ever anticipated Android being used on tablets, or if they are (in a rare reversal of the dynamic I described earlier) playing catch-up to the hardware manufacturers. There are some Android 2.2-based tablets coming out (mostly ultra-cheap hardware from China), though Jobs fails to mention Windows 7-based tablets at all. And what happens when manufacturers decide to use 2.2 on their (mostly ultra-cheap) tablets? Nothing world-shattering or device-breaking. The tablets will still be usable, and since the devices using 2.2 are mostly sub-$150, nobody will be surprised if they don’t work perfectly. I’d bet on the actual hardware breaking before the software becomes unusable.

And lastly:

Our potential competitors are having a tough time coming close to iPad’s pricing, even with their far smaller, far less expensive screens. The iPad incorporates everything we’ve learned about building high value products from iPhone, iPods, and Macs. We create our own A4 chip, our own software, our own battery chemistry, our own enclosure, our own everything. And this results in an incredible product at a great price. The proof of this will be in the pricing of our competitors’ products which will likely offer less for more.

While I’ve been cordial and even agreed with Jobs up until now, I have to call him out here and say this is almost an outright lie. The cheapest iPad (16GB, Wi-Fi only) is $499 on the Apple website, but double the memory to 32GB and the price goes up $100. Double again to 64GB and add another $100 (the same $100 for a 32GB bump as for a 16GB one?). Almost every comparable tablet has expandable memory, which means you can use microSD cards (16GB cards are under $30) to upgrade the storage ad infinitum. The barebones iPad may be aggressively priced, but start adding features and the price argument loses its weight fast. And yes, I am aware of the overpricing on the new Samsung Galaxy Tab, but once you include features such as 3G (the baseline 3G iPad is $629), the two are comparable in almost all aspects besides size: the Galaxy Tab comes in Jobs’ favorite form factor, 7″.

So no matter what happens–and things will start popping off this holiday season–old Steve summed things up quite well at the end of his call:

Sounds like lots of fun ahead.

Categories: Science/Tech

Mini-rant: Push Notifications

October 21, 2010

In this day and age, why aren’t all applications and services using push notifications? The expectation that phones (or any other Internet-connected device, gadget, or even PC) should spend their limited CPU and battery pinging massive server farms (which have resources to spare; see: cloud computing) for updates every minute or so, rather than the other way around, is almost unbelievable.

Granted, many apps such as Gmail and Twitter (with its real-time API) are catching up. Apple has even set up its own push notification service, taking the burden off the device. But there are still far too many apps with the backwards “Check for notifications/updates every __ minutes” option.

The change will occur when people realize that the server has a duty to be active in processing and distributing information, not to idle passively until someone pings it. When the paradigm changes there will be a small but noticeable improvement in the end-user experience: phones will not be constantly working to check whether you have new email or Twitter mentions, which means savings in CPU usage and battery life. There is also the small bonus of getting these updates almost instantaneously, as opposed to getting a batch every five minutes or so, which will make it feel much more like we are living in the present instead of in five-minute increments. And yes, I did just equate the idea of “living” with how synced we are with the wired, online world. Welcome to the future.
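To make the difference concrete, here is a rough JavaScript sketch of the two models (the endpoint URLs and the showNotifications handler are invented for illustration). The polling version wakes the device up on a timer and usually finds nothing; the push version keeps one idle connection open and only does work when the server actually has news:

    // Polling: the device does the work, and most requests come back empty
    setInterval(function () {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/api/updates');                    // hypothetical endpoint
      xhr.onload = function () {
        var updates = JSON.parse(xhr.responseText);
        if (updates.length) showNotifications(updates);   // hypothetical handler
      };
      xhr.send();
    }, 5 * 60 * 1000);                                    // every five minutes

    // Push (server-sent events): one long-lived connection, the server speaks only when there is news
    var stream = new EventSource('/api/stream');          // hypothetical endpoint
    stream.onmessage = function (event) {
      showNotifications(JSON.parse(event.data));          // arrives the moment it happens
    };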

Categories: Science/Tech

Blockbuster Files for Bankruptcy; and What About Blu-Ray?

September 24, 2010

Today brings news that brick-and-mortar movie rental giant Blockbuster has filed for Chapter 11 bankruptcy.  This comes as little surprise to anyone with a Netflix account or a computer with a decent internet connection. However, there are some greater implications:  can a physical retail store survive in this increasingly digital world?  Has the idea of going to a store to rent movies already become entirely antiquated?  And what does Blu-Ray have to do with all of this?

Let’s start with the last question first. You may remember the HD-DVD vs. Blu-ray digital media wars of a few years ago. And if you were unsure of how that ended up, get outside more. Chafed at the loss of their pet HD-DVD format, or just being bluntly honest, UK Xbox chief Stephen McGill was recently quoted as saying:

People have moved through from DVDs to digital downloads and digital streaming, so we offer full HD 1080p Blu-ray quality streaming instantly, no download, no delay. So, who needs Blu-ray?

To answer Mr. McGill’s question:  people who bought Blu-ray players need Blu-ray, Sony needs (read: wants) Blu-ray, and Blockbuster needed Blu-ray, but that’s about it.

First of all, Blu-ray does not equal 1080p, nor does it equal HD. Blu-ray is a storage medium that just happens to have enough space for full HD movie files, but the fact that it is a physical format means it needs specialized hardware to play it. To play a Blu-ray disc you need either a standalone Blu-ray player, a Blu-ray drive in your computer, or a Sony PS3 (probably the best-selling Blu-ray player to date). But do you need any of this to enjoy HD-quality video? The answer is no. Mr. McGill hit the nail on the head, whether there was a veiled attack on Sony (the competition) intended or not.

Blockbuster was built on VHS tape rentals and made the transition easily to DVD rentals when the latter became popular, but the divide between a digital medium and DVD is much greater than VHS and DVD.  And Blockbuster, in classic “wtf is the internet and what is happening” big-company form, completely failed to evolve and keep up with modern technology.  A young, tech-savvy company by the name of Netflix basically came out of nowhere, first eschewing physical stores in favor of mail-order rentals, getting rid of late fees, and ultimately delivering full-HD streaming content, all over the course of 10 years or so.  What was Blockbuster doing to stay relevant during this time?  First they “got rid of” late fees, then got sued for misleading people based on their “no late fees” campaign, and ended up, you guessed it: pimping Blu-ray.  They stuck with their physical media until the end, and if credit has to be given anywhere, at least they stuck to what they knew.

The most interesting theme these two news stories allude to is the idea of physical vs. digital, in this case in both content and distribution. A massive retail chain with thousands of stores worldwide crumbled in the face of one company that understood the ideas of non-localized delivery and a digital distribution platform. When it all comes down to it, there is no difference between the Blu-ray copy of a movie and the streaming HD copy. The content is exactly the same: both are digital copies of the same movie. The distribution is where the two differ, and the path from manufacturing to your screen goes a little something like this:

Blu-ray:  Discs manufactured -> shipped to store -> go to store -> buy disc -> put in specialized player -> your screen

Streaming:   Server -> your screen

The internet is at its best as a content-delivery system, which is something the people at Netflix foresaw and profited from, and something the people at Blockbuster grossly misunderstood. And this isn’t just about movies. It can be applied to CDs and mp3s, games, and virtually any other media format. Be on the lookout for what happens with network TV in the coming year or so, as the recently-released Apple TV gains momentum, Google TV is released, and users take control of programming back from the giant networks.

And lastly, don’t think I would end this without mentioning piracy. The problem with digital content is that it can be copied easily and endlessly. As soon as a Blu-ray disc is released (or even before), ripped copies begin to surface on filesharing sites where anybody can download and watch the movie for free. Right now movie companies and many other industries are trying to figure out how to control this. One of the methods is DRM (digital rights management). Not only is this a backwards way of thinking about managing digital content, but Blu-ray’s protection was also recently cracked wide open, proving once again that old ways of thinking and trying to control the new paradigm just won’t work. Is it a coincidence that Blu-ray’s DRM was cracked and just a few days later the entire format’s existence was publicly questioned? Blu-ray is a 20th-century answer to a 21st-century problem, DRM and all.

Categories: Science/Tech

ivi.tv Makes Life Overseas a Little More Like Home

September 23, 2010

I just wanted to make a quick blog post today about a service I recently found out about. It’s called ivi.tv, and it’s a fairly new startup founded with the goal of bringing TV programming to the web. Channels like ABC, NBC, CBS, Fox, and more are all there (no cable or satellite channels, of course), but if you live overseas and want to watch TV from America, a service like this could be a dream come true.

On the website they say that it is for America only, but I just entered an American address and it accepted me without any problems. So far connectivity is great and I am able to stream all the channels they offer. My only issue with the software is that it won’t let you go completely full screen (the window’s title bar is still visible), but other than that it works great. They have a TV guide feature built right in, so you can see exactly what shows are on and what is coming up. It is also nice that they offer both East and West coast stations, so if you miss a show you have a chance to see it again a few hours later.

One catch, though: the service isn’t free. They do offer a 30-day free trial, however, so if you’re on the fence you can give it a try first. Unfortunately, they require you to enter a credit/debit card in order to access the free trial, and I’m not sure if non-American cards are accepted (I used my American debit card). But at only $4.99 a month, the price is right for regular American broadcasting if you’re into TV. I am definitely looking forward to using it to watch some of my favorite shows and to watch football on the weekends, something I’ve been largely unable to do since I moved to Japan.

In the future it seems they are also planning to add “pro” features (for a small additional fee) that bring DVR-like capabilities to the software, so that you can record all your favorite shows without having to watch them live. Right now the only pro feature is the ability to pause live TV and, well, for an extra $1 per month I don’t think that is worth it.

So, if you are living overseas, don’t have a TV, or would just rather watch television on your computer, give ivi.tv a try. Let’s just all hope they don’t get shut down anytime soon…

Categories: Entertainment, General

Sega and Namco Bandai Announce New Common IC Card System for Arcade Games

September 10, 2010

[Photo: A Japanese arcade in action]

Yesterday, Sega and Namco Bandai announced that they would be implementing a new IC card system for arcade games in the coming months that can be shared across all Sega and Namco Bandai games released using the system. That means one IC card for all your favorite games.

For the uninitiated, many arcade games in Japan allow for the (optional) use of an IC card sold at the arcade (usually for about 500 yen) that can be used to save your game progress and stats, customize your character, and more. It’s a nifty little system, but I know my wallet is packed to the maximum with cards from all different kinds of games, some of which I never even play anymore. Sega and Namco Bandai are two of the biggest arcade game makers in Japan, so a common system that works between the two of them will definitely be a space (and money) saver if you are an avid arcade gamer in Japan.

Look for games using this new system at your local arcade in Japan sometime in the Fall of 2010.

(news via ITMedia)

Categories: Entertainment, Games

mixi Meetup 2010 -Social Leaders Conference- Coverage

September 10, 2010

Today, Japan’s own SNS mixi held its “mixi Meetup 2010 -Social Leaders Conference-” and streamed the whole affair live on popular video streaming website USTREAM. At the conference they announced some new mixi features and revealed their new mixi platform that can be used outside of mixi itself by external websites. Let’s take a look at what was discussed during the conference and what it means for Japan’s social networking future.

The opening session was about the future of social media, and in particular ideas for the future of social applications and SAPs (social application providers). Opening remarks were given by mixi’s CEO Kenji Kasahara to get the ball rolling. The actual lead-off speaker for the event was Kristian Segerstråle, Vice President and General Manager of Playfish, Ltd., the company behind the micro-transaction-based social gaming site playfish.com. He talked about the strategies overseas SAPs need to make their way into the Japanese market, and also discussed the challenges associated with the distribution and management of social applications compared to traditional desktop applications. I think this is a great step forward for progress and innovation from Japan, which has been lagging behind the social media and social networking movements in recent years. Things are certainly starting to pick up, with massive adoption rates for Twitter and mixi’s dynamic growth since its start way back in October of 2000.

The next session was conducted by the leaders of three major companies thriving on social application development and deployment in Japan: CyberX, Drecom, and Happy Elements. The tone of this discussion was one that I can’t really say I agreed with. The major theme was that Japan is a leader in the realm of social platforms and is at the forefront of innovation in the industry. I have to respectfully disagree here. mixi as an SNS platform has continued to copy ideas from overseas social networks throughout its evolution. Applications and an API to develop them for the mixi platform did not even exist until this year, much later than Facebook, which arguably brought the potential of social applications to the masses when it implemented them. Twitter has only recently become a popular method of social networking in Japan, and it still is not utilized to its full potential or to the extent it is being used elsewhere in the world. Geolocation-based social networking applications such as foursquare are being used, but they are merely applications for fun and are not being utilized by businesses to drive customers to their stores as they are in America and perhaps elsewhere. But, granted, this is a mixi-sponsored event and I am sure they are not going to bash themselves or allow anyone else to, either. That aside, overall the focus was again on the future of social applications (notice a pattern forming here?).

Now, on to the main event. The main session was dedicated to the announcement of mixi’s new platform and was titled “From an Internet Generation to a Social Networking Generation.” The session was headed up by mixi’s top execs and discussed some of the new services being developed using mixi’s new platform, as well as more of mixi’s vision of the future of social networking. Notably, as the title of the session suggests, much emphasis was put on how the internet is evolving from a mere database of information into a way for us, as societies and individuals, to connect in ways we have never been able to in the past. And I must say, while I am not a huge fan of mixi in general, it is good to see that there are at least some people here in Japan thinking about the future of the internet in a visionary way.

Some notable points regarding mixi’s new platform are full integration with China’s renren.com and Korea’s Cyworld social networking systems. Any games and applications developed for the mixi platform can be used across, and enable interactions between, all three of these social networks. As we noted previously on this blog, mixi is trying to break into the global market, and this seems to be their first step towards that goal. During this discussion they also mentioned they are in talks with the top social networks in 60+ countries around the world, with the goal of integrating the mixi platform with all of these networks to create a truly global social networking platform. It seems a lot of focus was put into social gaming, with one of the games introduced for the platform being a popular mixi application called “Machitsuku”, a micro-transaction, browser-based city simulation game (along the same lines as FarmVille or any other Zynga game, basically). As these types of micro-transaction-based social games do seem to be the trend these days, this was no surprise. But it is a bit of a disappointment to me, because Facebook already does this, albeit better and with many times the user base. I was hoping for a little more innovation when I heard they were announcing a “completely new social platform”, but I suppose we will have to wait and see how it evolves.

Overall, the conference was pretty lackluster in my opinion. I am glad to see more of a movement towards social networking and social media in Japan, but I am still not impressed with any of the work mixi is doing at this point. If by some chance they can succeed in creating a unified global platform for social application providers, that will be quite an accomplishment, but it is going to be difficult for them to break out of the Asian market and go global with Facebook’s ever-growing presence looming overhead. Let’s hope, though, that while it may not have been a driving step forward in innovation, this conference can act as a springboard for other small IT companies in Japan to work on new and imaginative ideas in the realm of social networking, so that we may finally begin to see some true innovation come out of the almost stagnant IT industry here.

Categories: Japan, Science/Tech

HTML5 — What it is, what it means, and the future of the web.

September 9, 2010

HTML5 — chances are, if you have been following any tech/web news for the past year or more, you have heard about it. But the question is… just what is HTML5, and what does it mean for the future of the web? Let’s start with a little bit of history.

Ever since the birth of the visual world wide web as we know it, web sites have been a fairly static form of media. Sure, there were a few fads that came and went (Java applets showcasing useless water-ripple effects and laughably clunky menus being the most memorable for me), and some that stayed the course and evolved into near-ubiquity across the world wide web, the most notable of these being Flash.

Through the evolution of Flash we began to see more and more interactivity on the web, but there was a price to pay, first and foremost being the abuse of Flash by advertisers. Blinking, flashing, and just plain obstructive ads became commonplace, and this led many users to not install Flash, or to disable it completely, just to get rid of them. Unfortunately, if you are a Flash developer or a company with a Flash-based website, by requiring the Flash plug-in you have potentially eliminated a score of visitors to your site. But as the only viable way of adding a truly interactive, multimedia experience to the web, it became an almost necessary sacrifice for delivering the content you wanted to convey to your audience.

Adding insult to injury, as internet connection speeds increased, the demand for streaming video on the web grew exponentially with them. Flash eventually became the method of choice for delivering this content, beating out earlier (terrible) technologies like RealPlayer. So now we have a situation where the web as we know it is severely limited without a third-party plug-in, and that is where we stand to this day.

However, with the rise of smartphones and more and more problems with the Flash plug-in and its cross-platform performance and compatibility, a movement is growing and a new standard is being drafted to (possibly) replace Flash and other similar plug-ins like Microsoft’s Silverlight for good. This new standard is HTML5.

HTML5 is the next major revision of the ubiquitous HTML standard. And even though the buzz around HTML5 has only started in the past year or so, work on the specification actually began way back in 2004 — ages ago in the tech world. In its current state (HTML5 is still a work in progress, and is expected to be for some years to come), HTML5 adds APIs to native HTML that give it many of the features you may find in Flash, such as 2D graphics through the <canvas> tag, timed video playback via the <video> tag, offline storage databases, drag and drop, and more. But that’s not all HTML5 brings to the table.
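To give a feel for how little is involved, here is a minimal snippet (the file name and element ID are invented for illustration) using two of those additions, the <video> tag and the <canvas> 2D drawing API, with no plug-in in sight:

    <!-- Native video playback: just a tag, no Flash -->
    <video src="trailer.mp4" controls width="640" height="360"></video>

    <!-- A scriptable 2D drawing surface -->
    <canvas id="demo" width="300" height="150"></canvas>
    <script>
      var ctx = document.getElementById('demo').getContext('2d');
      ctx.fillStyle = '#3366cc';
      ctx.fillRect(20, 20, 120, 80);               // draw a filled rectangle
      ctx.fillText('Drawn with canvas', 20, 130);  // and a line of text
    </script>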

HTML5 is also designed to enable web developers and designers to describe more naturally what the content on their pages actually represents. Tags for layout (which can all be styled via CSS, of course) such as <header>, <footer>, and <aside> were added, along with <nav> for navigation and true support for all world languages, with additions such as the <ruby> tag for the ruby annotations used in East Asian typography, to name just a few. Thus, the full potential of HTML5 comes from combining actual HTML5 markup with CSS3 for styling and JavaScript for adding interactivity and dynamic behavior to these elements. This is why we are seeing such a concentrated focus on improving JavaScript performance in all major browsers, including adding hardware-accelerated graphics rendering. Unfortunately, the performance is still not quite on par with Flash, but with time JavaScript execution will almost certainly meet, if not surpass, that of Flash.
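As a rough illustration of what that more descriptive markup looks like (the page content here is invented), a simple blog layout might read:

    <header>
      <h1>My Blog</h1>
      <nav>
        <a href="/">Home</a>
        <a href="/archive">Archive</a>
      </nav>
    </header>
    <article>
      <h2>HTML5 and the future of the web</h2>
      <p>Semantic tags describe what content is, not how it looks.</p>
      <p>Ruby annotations, too: <ruby>漢字<rt>かんじ</rt></ruby></p>
      <aside>Related posts could be listed here.</aside>
    </article>
    <footer>
      <p>Copyright, categories, and so on.</p>
    </footer>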

So, now that you know what HTML5 is… why should you care? Flash works, right? Why do we need a replacement? If you have ever tried to visit a Flash website on your iPhone, Android, or other smartphone, you will know that Flash doesn’t work on these devices. There is at least one mobile browser I know of (SkyFire) that attempts to bring the full web experience (including Flash) to the mobile world, but from my experience using it on Windows Mobile I must say that the performance left much to be desired.

However, I can run HTML5 apps and games natively on my iPhone with great performance and usability, using no third-party plug-ins, browsers, or other software at all. It just works. And that’s the allure of HTML5. Write an app or a game in HTML5 and you can market it to every platform with access to an HTML5-compatible browser. That means your app will run on the iPhone, iPad, Android, Symbian, desktops… you get my point. In fact, there are already two HTML5-based app frameworks designed specifically for mobile devices out in the wild, and I am sure there are even more in the works. One is called SproutCore and the other is Sencha Touch.

SproutCore is designed more with desktop-class application building in mind, but it seems they are in the process of adding a subset of features for mobile, touchscreen-based devices as well. Sencha Touch, however, is being designed from the ground up as a web/HTML5-based platform for mobile devices, specifically modeled after the iPhone’s UI and design methodologies. I have toyed with both of these frameworks, and hands down Sencha Touch is the more polished for developing on mobile devices (although SproutCore is very nice if you want to build desktop-like web applications).
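For a sense of what that iPhone-style approach looks like in code, here is a bare-bones sketch in the spirit of Sencha Touch’s beta API, written from memory; treat names like Ext.setup and Ext.TabPanel as an approximation rather than a reference, since the beta is still a moving target:

    Ext.setup({
      onReady: function () {
        // A full-screen tab panel with two simple cards
        new Ext.TabPanel({
          fullscreen: true,
          items: [
            { title: 'Home',  html: 'Hello from HTML5' },
            { title: 'About', html: 'Built with web technologies only' }
          ]
        });
      }
    });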

Unfortunately, even though Sencha Touch is more polished, it is still in beta and it shows. Many of the examples it ships with are mighty impressive (click here if you are on a mobile device and want to see the demos first-hand), but they still lack the responsiveness and smoothness found in native applications. Some of the operations, such as drag and drop, also seemed fairly buggy, and Google Maps performance was very sluggish when scrolling and zooming on my iPhone 4. Overall, however, there is nothing stopping you from making a full-scale mobile application using the APIs and widgets it provides. I think that as time goes on and all the technologies involved are fleshed out and become more established, and as mobile internet connection speeds increase (or wi-fi availability becomes ubiquitous), we will see a vast movement towards applications and games being websites instead of natively compiled software packages.

In summary, HTML5 is a work in progress but it is already exhibiting an impressive feature set that rivals that of currently existing third party technologies while introducing even more revolutionary concepts to the world wide web. With the adoption of HTML5 we will begin to have access to our apps and data from any device, anywhere as the software that we run on those devices moves away from the devices themselves and onto the cloud. The internet as we know it is about to change, and change big… hopefully for the betterment of all its users.

Categories: Science/Tech