Time for an alternative to PHP?

In this post, a polyglot dev (PHP included) argues that we need an alternative to PHP: a widely used replacement for building web applications and web sites. The idea is that if you build a simpler, more consistent, more secure web app/site language with built-in support for fun new technologies and techniques like HTTP/2, WebSockets, unikernels and concurrency/async primitives, they will come. “They” in this case being developers who wouldn’t normally know how to write good code in PHP and will magically do so in a language that makes it easier to do so.

The post concludes with no clear recommendation of an existing language, nor even “X language plus Y features”. And, perhaps more importantly, the post doesn’t tackle how that language would rise to fame; remember that we’re talking about an alternative that can take its place as the lingua franca of server-side programming in a web application context, ostensibly by providing killer apps for both new development and cross-language refactors. I’ll come back to this omission in a minute. First, let’s go over the stated objections to PHP as it stands. Read the rest of this entry »

PHP Versions and Modernizing Legacy Applications

Last weekend I attended SunshinePHP (it was a blast; you should go next year if you didn’t this year…or if you did, for that matter). Friday night, there was a panel on minimum PHP versions, with an eye to raising the bar to something in recent, non-end-of-life history rather than allowing versions that won’t get security fixes anymore. The battle cry there was one of pushing hosts, devs, sysadmins and communities in general to newer versions (5.5, 5.6, and 7 late this year) in the name of better speed, better security, and a much happier environment for developers.

This battle cry was mixed with explanations from some panel members of why their packages still support PHP 5.2 and 5.3 (remember, neither gets security fixes anymore). Their argument: raising the minimum version requirement on CMS-centric frameworks like CodeIgniter, or on CMSes themselves like WordPress and PyroCMS, would strand user bases on unsupported, vulnerable software, rather than getting those devs and end users onto a supported, more dev-friendly version of the runtime. For full-stack frameworks, and given the proliferation of, and ease of migration to, 5.4+ hosts, I find this unconscionable, for reasons stated eloquently by Anthony Ferrara.

But another member of the panel also supports PHP 5.3 with his libraries: Paul M. Jones with the AuraPHP project. Why am I not railing against this…and the fact that the Aura v2 libraries actually downgraded their version requirements relative to Aura v1? Paul mentioned that the effort to maintain 5.3 compatibility was quite low (swapping short array syntax back to array(), dropping callable typehints), but there’s a better reason: Aura libs can be used to modernize applications and serve as a bridge to current versions…and you want to put the other end of the bridge where those apps are sitting right now. Read the rest of this entry »


A rebuttal to the rebuff of the latest XBox One DRM decision

According to this article, Microsoft has switched feet on its foot-shooting escapade that is the XBox One. The short version of the story: Microsoft decided to roll back truly heinous DRM on its games, but in return users are giving up features that make the console a generation ahead of the PS4. Or something like that. Yeah…no.

The reason: if you want to lace a disc with heavy DRM that enables you to use the game in a not-disc way, you’re doing it wrong. As long as folks own physical media (and, like it or not, they own the metal and plastic wafer that the game is printed onto), they’ll have in their mind the concept of ownership. Right of first sale and all that. Which is why Sony’s 22-second “how to share a PS4 game” video struck such a chord with folks.

Now let’s look at the downloadable game side of things. The expectation of playability anywhere is there, but is tempered by an expectation of DRM. Anyone who has downloaded a PC game from the likes of Amazon or EA Origin has seen this; you can pull the game however many times, but don’t expect to play it simultaneously on two different machines. Just like with a disc in a drive, it won’t work. Which makes sense…you’ve got to protect those bits somehow.

This raises the question: should a game be available in both disc and download formats, each with its own DRM scheme? My answer: absolutely. Build what the customer expects into the disc, and what you think the customer might want into the download. The author mentions that he has a fast, reliable ‘net connection. That’s great; that means you can buy a downloadable game and skip the disc once and for all.

The point of physical media (which can pack 20+ GB of content onto a single Blu-ray disc) at this point is to provide a fast-loading alternative to the slow average connection speed of Internet users at large. For them, downloading an entire game is an ordeal, particularly if their connection is capped, throttled or slow all the time (not all of us have Google Fiber, FiOS or even Comcast available). And their friends may be in the same boat, so schlepping a disc from point A to point B isn’t a big deal, but dealing with on-disc DRM is. You don’t want another SimCity, do you?
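
To put “an ordeal” in rough numbers, here’s a back-of-the-envelope sketch. The connection speeds are illustrative assumptions on my part, not figures from any source:

```python
# Back-of-the-envelope: how long does a full game download take?
# Speeds below are illustrative assumptions, not measured figures.

def download_hours(size_gb: float, speed_mbps: float) -> float:
    """Hours to transfer size_gb gigabytes at speed_mbps megabits/sec."""
    bits = size_gb * 8 * 1000**3           # decimal GB -> bits
    seconds = bits / (speed_mbps * 10**6)  # Mbps -> bits/sec
    return seconds / 3600

# A 20 GB Blu-ray-sized game:
print(round(download_hours(20, 10), 1))   # ~10 Mbps cable/DSL -> 4.4 hours
print(round(download_hours(20, 100), 1))  # ~100 Mbps fiber    -> 0.4 hours
```

Four-plus hours on a typical connection (and that’s before any cap kicks in) versus a minute of sneakernet is the whole argument for keeping discs around.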

tl;dr: Customers have spoken, and Microsoft did the right thing by rolling back its physical disc DRM. If you want more features at the expense of DRM, there is a solution: downloadable games (which should be doable with every single game). Locking down physical media isn’t.


The New MacBook Airs

My most recent tech purchase over $500 was a computer. Specifically, an HP Envy x2. One of the reasons: amazing battery life. Twelve hours or so. The catch: the darned thing pokes along due to an Atom Z2760 CPU. But it’s also $580, so that’s forgivable.

My workhorse notebook is an early 2009 MacBook…with a few upgrades. It’s got the 2GHz Core 2 Duo CPU and nVidia 9400M graphics…backed up with 8GB of RAM and a 256GB Crucial m4 SSD. It’s not the speediest machine out there, and I can’t seem to find a decent replacement battery so I can only get three hours or so away from an outlet, but with the RAM and disk upgrades it’s actually reasonably fun to use.

Why did I just bring up two pieces of old/low-end equipment that have nothing to do with the current MacBook Airs announced a couple hours ago, other than screen size? Because replacing both with a 13-inch Air isn’t out of the question for me…later this year, once the newest OS X edition comes out. That said, there are a few specs that got glossed over during the presentation today, amid all the talk about power efficiency (nine hours on a charge for an 11-inch machine, or twelve hours on a 13-inch, is just excellent). Stuff like CPU speed and upgrade costs.

Read the rest of this entry »


Quick Thoughts on Google I/O Day One

This is going to be a bit of a rapid-fire, non-exhaustive list, but…

  1. Having an IDE other than Eclipse for Android dev makes me want to pick up the platform again. JetBrains, maker of the IntelliJ IDEA IDE on which the new Android IDE is based, is a solid outfit (I use one of their other IDEs relatively regularly).
  2. I’m not buying a Galaxy S4 “Nexus Edition”. My S III is just fine, and the S4, in addition to being expensive, has the same problem that the Nexus 4 has: I can’t get 4G where I need it because Sprint is the only carrier that can do that.
  3. I should have gone to I/O. I wouldn’t pay full rack rate for the S4 Developer Edition or the Chromebook Pixel (though I’ve thought about the latter), but I would certainly use the heck out of said devices if they were included in the price of admission.
  4. Watch out, PayPal. Google isn’t the first to do person to person money transfers, but if you’ve got a Google Play account and Google has opened up the new “attach money” feature to you, the amount of effort required to send money to someone else is ridiculously low.
  5. The new Hangouts isn’t the first time Google has done photo sharing through chat (and the makers of Hello did a really good job with that app, speaking from personal experience). It’s been a while, though.
  6. Speaking of Hangouts, the fact that the service has been pushed in the direction of a persistent chat room with video calling et al as a situational add-on is…well…the way it should be.
  7. Per-minute billing (with either one-hour or ten-minute minimums) on Google’s IaaS compute offering is really cool. Nice to see Amazon one-upped at their own game, at least in this small way, and I’m sure that this will make sites that see serious traffic spikes for smallish periods take note of Google’s offering. Until its competitors implement the same thing, of course.
  8. The new Maps looks epic. If only I could actually use it.
  9. I want an H.264 (AVCHD) -> VP9 encoder (CLI is fine…integration into Handbrake is a nice bonus) yesterday. Or a whatever -> VP9 encoder, for that matter. I also want to know how VP9 compares to H.265 (is it inferior like VP8 is compared to H.264, or is it pretty comparable?).
  10. I, for one, welcome our new voice search enabled, auto-image-enhancing, auto-hash-tagging overlords. The competition is a click away, but they just aren’t up to snuff compared to Google in so many of these areas.
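
Item 7’s per-minute billing point is easy to quantify. The rates below are made-up round numbers purely for illustration, not Google’s or Amazon’s actual prices:

```python
# Why per-minute billing matters for spiky traffic. The hourly rate is a
# hypothetical round number, not any provider's real pricing.

HOURLY_RATE = 0.60  # $/hour for a hypothetical instance

def cost_hourly_billing(minutes: int) -> float:
    """Bill in whole-hour increments (partial hours round up)."""
    hours = -(-minutes // 60)  # ceiling division
    return hours * HOURLY_RATE

def cost_minute_billing(minutes: int, minimum: int = 10) -> float:
    """Bill per minute, with a 10-minute minimum charge."""
    return max(minutes, minimum) * (HOURLY_RATE / 60)

# A 70-minute traffic spike:
print(cost_hourly_billing(70))            # billed as 2 full hours -> 1.2
print(round(cost_minute_billing(70), 2))  # billed as 70 minutes   -> 0.7
```

For workloads that spin up extra instances for an hour or two at a time, that rounding difference adds up fast, which is exactly the “serious traffic spikes for smallish periods” case.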


Internet QoS Sucks: why modern browsers use parallel connections

Earlier tonight (using “tonight” loosely) I attended a meetup that hosted an excellent presentation about scaling AngularJS applications, by a guy who obviously knows what he’s talking about. But this post is, more or less, not about that.

It’s about a comment that I made, in response to a question fielded by a co-attendee of the meeting. The question went something like this: “Why is there a performance gain in delivering multiple code files over the wire to the end user’s browser, versus just one, when you’re going from one server to one client? Shouldn’t a single transfer just max out their connection anyway?” My response: “[incomprehensible mumbling] Basically, Internet QoS sucks.”

That’s an oversimplification, but not as far from the truth as you’d expect. Read the rest of this entry »
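
One way to see the “QoS sucks” answer: if anything on the path limits throughput per flow (per-connection caps, fair queueing, TCP backing off after loss on a single stream), a lone transfer never fills the pipe, while extra connections each grab their own share. This is a toy model with made-up numbers, not a measurement:

```python
# Toy model of why parallel connections can beat one big transfer:
# assume something on the path throttles *per flow*, so each extra
# connection gets its own share up to the client's real link capacity.
# Both numbers are illustrative assumptions, not measurements.

PER_FLOW_CAP_MBPS = 5   # hypothetical per-connection throttle
LINK_CAP_MBPS = 50      # the client's actual pipe

def effective_throughput(connections: int) -> float:
    """Aggregate Mbps given a per-flow cap and an overall link cap."""
    return min(connections * PER_FLOW_CAP_MBPS, LINK_CAP_MBPS)

for n in (1, 2, 6, 20):
    print(n, effective_throughput(n))  # 1 -> 5, 2 -> 10, 6 -> 30, 20 -> 50
```

A single connection tops out at the per-flow cap; six connections get six shares; past ten, the client’s own link becomes the bottleneck and more connections stop helping. Browsers open a handful of parallel connections per host for roughly this reason.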


CORS in an API?

I had a question a few days ago, and am going to bring it up at this month’s Austin API meetup: should you use CORS in an API? I suppose that leads into another question: should your API be built to be used by an in-browser application served from a domain other than your own? Read the rest of this entry »
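
For context, “using CORS” boils down to the API attaching a few response headers that tell the browser a cross-origin read is allowed. Here’s a minimal sketch; the allowed origin and the header values are illustrative placeholders, not a recommendation for any particular API:

```python
# Minimal sketch of CORS response headers for an API. The origin below is
# a hypothetical client domain, purely for illustration.

ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers(request_origin: str) -> dict:
    """Headers a CORS-aware API attaches to a cross-origin response.
    (Allow-Methods/Allow-Headers matter for preflight OPTIONS requests.)"""
    if request_origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser blocks the cross-origin read
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
    }

print(cors_headers("https://app.example.com")["Access-Control-Allow-Origin"])
print(cors_headers("https://evil.example") == {})  # unknown origin -> True
```

Note that CORS only gates what the *browser* lets a page read; it does nothing against non-browser clients, which is part of why the “should you even support this?” question matters.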
