After five-plus years of smartphones saturating the market, it’s become clear that mobile device applications are an unqualified phenomenon, and a boon to application developers and app store vendors.

Apple recently reported that it is currently selling more than 1 billion mobile apps every month from the App Store; that’s an average rate of 23,148 apps per minute! The number of available apps is also increasing at an almost exponential rate. As the Apple marketing campaign goes, “there’s an app for that”, and not just on Apple’s app store: Google’s Android Marketplace, Microsoft’s Windows Marketplace for Mobile, RIM’s Blackberry App World, Symbian’s Horizon, and many others provide instant, downloadable applications and content that range from absolutely free to thousands of dollars.
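As a back-of-the-envelope sanity check, the per-minute figure follows directly from the monthly one. This sketch assumes a 30-day month (an assumption; the reported figure is simply “more than 1 billion apps per month”):

```python
# Back-of-the-envelope check of the per-minute download rate quoted above.
# Assumes a 30-day month; the source figure is "more than 1 billion per month".
apps_per_month = 1_000_000_000
minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month

rate = apps_per_month / minutes_per_month
print(f"{rate:,.0f} apps per minute")  # prints "23,148 apps per minute"
```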

The problem is that the mobile devices on which these apps are being installed are increasingly used not only for phone calls and running localized apps, but for enterprise connectivity – and app stores don’t exactly promote the most secure activity for their users.

On one side of the equation, enterprise IT and security departments across both the public and private sectors have struggled with developing the “right” security policy for mobile devices. From policies that permit a bring-your-own-device (BYOD) approach, all the way down to enterprise-owned and enterprise-maintained devices, agencies and corporations must strike a balance between functionality, ownership and security.

On the other side, it’s in the best interests of app store developers (and critical to the success of their business models) to make it as easy as possible for users to download applications from their app repositories. On the surface, these two things – enterprise mobility and self-service app stores – may not seem like they’re particularly in opposition to each other. However, there’s a problem: the integrity of the applications within the app stores themselves.

Let’s say I’m a rogue developer, and I want to create software that will allow me to take control of mobile devices – including devices that connect to enterprise applications and services – and see if I can find anything from credit card data to classified information. I’m probably not going to have much success getting people to install my app if I call it “Mobile Device Rootkit”.

But what happens if I build a simple game, attach my evil code to it, call it “Angry Birds 2” (implying it is the successor to “Angry Birds”, the most popular downloaded mobile game of all time), and post it to an app store? Well, depending on the app store, I may – or may not – have my application evaluated by the vendor before it’s made available, and in some cases, I may be able to easily circumvent the requirement for registering as an “official” developer. I can actually get users to voluntarily install my malware, and have them propagate it for me! And of course, personal data isn’t all I’m going to get (at least, not anymore).

Those BYOD policies will give me access to enterprise data over VPN and encrypted tunnels, e-mails, and other connectivity methods used by the mobile device. Today’s hacker checklist: “Total Global Domination.” Done!

So, while app stores might be a phenomenon, they’re also a phenomenal headache for federal cybersecurity professionals. What can agencies do to minimize this kind of emerging threat?

The first line of defense is the development of a device usage policy that makes it clear what is – and isn’t – acceptable. The policy might impose a complete ban on the use of personal mobile devices, or a partial one, prohibiting users with privileged access to mission-critical systems from using personal mobile devices in the office.

You might decide on a middle ground where users can bring their own devices into the workplace, but security is managed centrally by the agency; users who don’t allow their device security to be managed are barred from connecting to your infrastructure.

Perhaps you keep all communications to the enterprise running through a pristine “sandbox” such as mobile device virtualization, where even the most pernicious malware can’t touch it. Perhaps you utilize advanced biometric authentication to ensure that privileged device functions must be approved through a fingerprint scan before they’re executed. And of course, you’re continuously monitoring, reporting, and alerting on all connections from mobile devices into your environment, aren’t you?

But security of mobile devices can’t simply be a one-way street, where federal agencies and private enterprise are solely responsible for the integrity of applications and confidentiality of their data; there must also be a significant burden of responsibility falling on app store vendors. While the Apple App Store is known for ruling developers with a beautifully sculpted fist of iron, the Android Marketplace is more akin to the Wild West – where developers can upload apps without the same level of constraints that developers of iOS apps face.

In an ideal world, the solution lies somewhere in between. App store vendors can still provide an open marketplace for apps, without having to exert absolute control over content. There also needs to be a much better app developer registration and vetting process that will reduce the likelihood of rogue developers and “fly-by-night” vendors. This will require the use of strong, multi-factor authentication using certificates; the ability for app store owners to vet developers on factors like reputation and geographic location; and the ability to fast-track credential de-certification when a developer is implicated in a data breach.

Developing an app ecosystem that reduces the chances of rogue developers creating and ‘selling’ malicious applications also requires the establishment of coding standards and libraries that are monitored and enforced. Providing a secure, preferred way to access and use APIs, barring developers from “loophole” programmatic methods, and making app security logging mandatory for every app will also help reduce the risks. My last recommendation is that app store owners have a periodic re-certification process for developers – if they don’t recertify, then they don’t publish new apps, and their previous apps are removed from the store and (if technically feasible) de-certified for execution on mobile devices on which they’ve already been installed.
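The developer lifecycle described above – register, vet, certify, fast-track de-certification on a breach – can be sketched as a simple state machine. This is a hypothetical illustration; the names, reputation scoring, and policy threshold are assumptions, not any real app store’s API:

```python
# Hypothetical sketch of a developer vetting/de-certification workflow.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    REGISTERED = auto()    # identity submitted, not yet vetted
    CERTIFIED = auto()     # vetted; allowed to publish apps
    DECERTIFIED = auto()   # credentials revoked; apps pulled from the store

@dataclass
class Developer:
    name: str
    reputation: float              # e.g. 0.0-1.0, from vetting checks
    status: Status = Status.REGISTERED

    def vet(self, min_reputation: float = 0.7) -> None:
        # Certify only developers whose vetting score clears the bar.
        if self.reputation >= min_reputation:
            self.status = Status.CERTIFIED

    def report_breach(self) -> None:
        # Fast-track de-certification when a breach is reported.
        self.status = Status.DECERTIFIED

dev = Developer("acme-games", reputation=0.9)
dev.vet()
print(dev.status.name)   # prints "CERTIFIED"
dev.report_breach()
print(dev.status.name)   # prints "DECERTIFIED"
```

A real implementation would tie certification to cryptographic credentials (e.g. revoking a signing certificate), so that de-certification can propagate to devices where the developer’s apps are already installed.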

Using a mobile device, much like driving, is a privilege and not a right. That privilege comes with certain responsibilities, many – but not all – of which are a burden of the user and the enterprise. But operators of app stores need to step up and recognize both their culpability in the dissemination of malware, and their accountability to fix the problems where they can. It’s only through effective security on everyone’s behalf that mobile devices – and the applications they run – will be safe to use at the enterprise level.

John Linkous is vice president and chief security and compliance officer of eIQnetworks, Inc.