Mobile app security is a growing concern, but most companies are still struggling to keep up. Android has traditionally been the operating system associated with mobile security issues, but recent stats released by HP show that iOS developers are also fighting an uphill battle. From Network World:
HP today said security testing it conducted on more than 2,000 Apple iOS mobile apps developed for commercial use by some 600 large companies in 50 countries showed that nine out of 10 had serious vulnerabilities.
Mike Armistead, HP vice president and general manager, said testing was done on apps from 22 iTunes App Store categories that are used for business-to-consumer or business-to-business purposes, such as banking or retailing. HP said 97% of these apps inappropriately accessed private information sources within a device, and 86% proved to be vulnerable to attacks such as SQL injection. …
In its summary of the testing, HP said 86% of the apps tested lacked the means to protect themselves from common exploits, such as misuse of encrypted data, cross-site scripting and insecure transmission of data.
Some apps didn’t properly encrypt data and some didn’t implement HTTPS correctly, meaning hackers could easily compromise apps or access and leak private information. Some apps had security measures built in that went unused as development progressed – meaning the final product wasn’t as secure as it could, and should, have been.
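To make one of the exploit classes HP flags concrete: the SQL injection it mentions almost always comes from building query strings out of user input. Here is a minimal, hypothetical sketch in Python (the table and data are invented for the demo) showing the vulnerable pattern next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'meet at 3pm')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: splicing input into the SQL text lets the quote in the
# payload rewrite the query's logic and dump every row.
leaked = conn.execute(
    "SELECT body FROM notes WHERE owner = '" + user_input + "'"
).fetchall()
print("string-built query returned:", leaked)   # all rows

# Safe: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT body FROM notes WHERE owner = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)    # no rows
```

The same principle carries over to SQLite on iOS and Android: bind parameters, never concatenate.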
Poor security isn’t only bad for users, it’s detrimental to a brand’s reputation and might even put a company’s proprietary information at risk. Don’t gamble with something as important as app security. Learn what you need to know in this free whitepaper on Mobile App Security Testing.
There’s an old saying in the military that “generals always fight the last war, especially if they have won it.” Quite simply, it means that when preparing for the next threat, one should resist looking too closely at past wars fought at different times, with different technologies, under different circumstances. The French were overrun despite the Maginot Line – a perfect defense against further aggression from the Germany of World War I, but practically useless against the German Blitzkrieg of World War II.
So it is with app security. Early in the era of web security, there was still a strong fear of classic desktop vulnerabilities like stack smashing and buffer overflows. Now that we’re in the era of mobile apps, the security world is still stuck in the realm of web security. Despite the fact that it’s been over two years since the OWASP group released the top 10 vulnerabilities for mobile, few mobile developers think to security test their apps.
Last week, researchers revealed yet another vulnerability type for iOS applications. Using a man-in-the-middle approach, an attacker can trick an iOS app into communicating with a web API at a malicious URL – not just once, but persistently, long after the initial attack. Ars Technica has the details of the attack, discovered by the security group Skycure, but the summary is simple.
As it turns out, many iOS apps do not have strong protections against malicious 301 redirects for API calls. A 301 redirect is basically a web command that says, “This thing isn’t here anymore. It’s over there instead.” Webmasters use it when moving content from one place to another so that links keep working even when the content’s address has changed. But if I can interfere with your app’s communications and say “your API is really over here on my malicious server,” then I can theoretically intercept or even modify the content your app relies on. Because the 301 code signals a permanent redirect, your app will continue to use my malicious server long after I have stopped directly interfering.
The fix is simple: make sure your app uses SSL instead of communicating in plaintext. It’s odd to think that in this day and age anyone would still run an unencrypted API, but here’s the perfect reason to stop. By taking the extra step of encrypting your API’s traffic, you’re also making sure your users only see the right content and not something malicious or bogus. SSL certificates are cheap compared to the cost of your app losing its reputation over security problems.
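As a sketch of what “don’t trust redirects off HTTPS” can look like in client code, here is a hypothetical Python example (the API URL is a placeholder) that follows redirects only when they stay on HTTPS and on the original host:

```python
import urllib.error
import urllib.request
from urllib.parse import urlsplit

class StrictRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Follow a redirect only if it stays on HTTPS and the same host."""

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        origin = urlsplit(req.full_url)
        target = urlsplit(newurl)
        if target.scheme != "https":
            raise urllib.error.HTTPError(
                newurl, code, "refusing redirect off HTTPS", headers, fp)
        if target.hostname != origin.hostname:
            raise urllib.error.HTTPError(
                newurl, code, "refusing cross-host redirect", headers, fp)
        return super().redirect_request(req, fp, code, msg, headers, newurl)

opener = urllib.request.build_opener(StrictRedirectHandler)

# Placeholder URL: the point is that the scheme is https and that a
# 301/302 pointing anywhere suspicious raises instead of being cached.
with opener.open("https://api.example.com/v1/feed") as resp:
    print(resp.status, resp.geturl())
```

A native app would enforce the same rules in its networking layer; the point is that a redirect is untrusted input like anything else.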
Security testing is difficult and requires constant upkeep as hackers and attackers find new vulnerabilities. At uTest, we have a basic list of six key security features all applications should be tested for at a minimum (a rough smoke-test sketch follows the list):
- Confidentiality: Does the app keep private data private?
- Integrity: Can the data be trusted and verified?
- Authentication: Does the app check to see if you are who you say you are?
- Authorization: Does the app properly limit privileges?
- Availability: Can an attacker take the app offline?
- Non-Repudiation: Does the app keep a record of events for later verification?
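Here is the smoke-test sketch promised above – a deliberately simple, hypothetical probe (the endpoint URLs are placeholders) that exercises two of the six features, confidentiality and authentication:

```python
import urllib.error
import urllib.request

BASE = "https://api.example.com"   # hypothetical API root

def check_confidentiality():
    # Confidentiality: traffic to the API must be encrypted in transit.
    assert BASE.startswith("https://"), "API traffic is not encrypted"

def check_authentication():
    # Authentication/authorization: a protected endpoint should reject
    # anonymous requests instead of serving data to anyone who asks.
    try:
        urllib.request.urlopen(BASE + "/account/profile")
    except urllib.error.HTTPError as err:
        assert err.code in (401, 403), f"unexpected status {err.code}"
    else:
        raise AssertionError("protected endpoint answered an anonymous request")

if __name__ == "__main__":
    check_confidentiality()
    check_authentication()
    print("basic confidentiality and authentication checks passed")
```

Real test suites go much deeper, but even checks this simple catch the embarrassing failures first.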
When it comes to mobile app security testing, things may seem simpler – a hacker probably can’t infect your app with malicious code that will compromise users – but in reality there are just as many things to consider as ever. Many of the six factors listed above hold true for any type of application – it’s all about keeping data safe and private. Michelle Drolet, founder of Towerwall, elaborated on this point in a recent article for Network World. Here are her “tips for testing vulnerabilities:”
There are many potential weak spots in mobile apps. Knowing where they are can get you off to a good start.
- Data flow — Can you establish an audit trail for data, what goes where, is data in transit protected, and who has access to it?
- Data storage — Where is data stored, and is it encrypted? Cloud solutions can be a weak link for data security.
- Data leakage — Is data leaking to log files, or out through notifications?
- Authentication — When and where are users challenged to authenticate, how are they authorized, and can you track password and IDs in the system?
- Server-side controls — Don’t focus on the client side and assume that the back end is secure.
- Points of entry — Are all potential client-side routes into the application being validated?
This is only the tip of the iceberg in terms of comprehensive security testing for mobile apps. Factor in the peculiar demands of compliance in your industry, because it is vital that you meet the right standards for regulations and mandates.
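Of the weak spots above, data leakage to log files is one of the easiest to start checking automatically. A rough sketch, assuming you have captured device logs to a file (for example with adb logcat -d > app.log on Android); the patterns are illustrative, not exhaustive:

```python
import re
import sys

# Rough patterns for things that should never appear in app logs.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "bearer token": re.compile(r"Bearer\s+[A-Za-z0-9\-_.]{16,}"),
    "password field": re.compile(r"password\s*[=:]\s*\S+", re.IGNORECASE),
}

def scan_log(path):
    hits = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((lineno, label, line.strip()))
    return hits

if __name__ == "__main__":
    for lineno, label, line in scan_log(sys.argv[1]):
        print(f"line {lineno}: possible {label}: {line}")
```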
Read the full article at Network World >>>
Check out uTest’s whitepaper for more tips on mobile app security testing. It explains the six sensitive areas uTest highlights and expands on many of the same points Michelle made in the Network World article.
A recent app analysis by security specialist Veracode found that “91% of the top mobile apps unnecessarily expose a user’s personally identifiable information.” From Info Security:
Mobile cyber-threats are increasingly on the rise, not only in the form of malware but also just lax security safeguards within applications. Veracode conducted an analysis of the most popular mobile applications used within enterprises and found that many of these apps access confidential and sometimes personal data on the mobile device and expose sensitive information to unknown parties.
Clearly, this is going to be a growing issue as BYOD continues to expand. When users download personal apps that aren’t rigorously tested and vetted, it puts all their data – personal and corporate – at risk. Short of banning BYOD, there’s little companies can do to stop this unless mobile app security standards become more rigid.
If you’re looking to learn about app security, check out this Mobile App Security Testing whitepaper which covers topics like key vulnerabilities, areas to consider and test and security features that are important to users.
Mobile security attacks are more common than you think, and are occurring worldwide on a variety of devices, operating systems and carriers. A recent infographic, posted on Mobile Marketing Watch, breaks down the likelihood of attacks by type of threat and where they occur.
According to a survey conducted by Sonatype, open source components are a major factor in application creation, but their security leaves something to be desired. Sonatype surveyed 3,500 developers, architects and managers from 50 countries about their use of open source components and their approaches to the security of those components. Respondents included people from Netflix, LinkedIn, Facebook, Disney, GE, eBay and other major companies (25% of respondents come from organizations with more than 500 developers).
Overall, the study found that we’re getting better at paying attention to the security of open source components, but we still have a long way to go.
- 57% of companies don’t have an official policy regarding the use of open source
- 32% said there are no standards regarding open source use – each developer/team chooses their own components
- When there is an open source policy, it’s not always enforced (31% said that the biggest challenge with their company’s policy is that there is no enforcement)
- 47% of policies say that developers must avoid known vulnerabilities. Only 24% make developers prove they are not using components with known vulnerabilities
- 61% of developers dismiss or overlook security for one of three reasons – they believe security is not their responsibility, they know it’s important but feel they don’t have time to address it, or it’s simply something they’re not focused on
- 45% of organizations don’t keep an inventory of the open source components they use
Get more stats from Sonatype >>>
In the end, if your application presents a vulnerability or security issue, your users will blame you. They don’t care if the problem is with someone else’s code; they expect you to take the time to make your app secure, no matter how it was developed. So know the open source components you’re using, pay attention to known vulnerabilities and dedicate some time to security testing your mobile app to make sure the entire thing is as secure as possible.
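As a concrete first step, the inventory doesn’t have to be fancy. Here is a minimal sketch assuming a Python project with a pinned requirements.txt; the same idea applies to whatever manifest your build system uses (Gradle, CocoaPods, npm, and so on). You can’t check a component against known vulnerabilities if you don’t know which version you ship:

```python
import re
from pathlib import Path

def inventory(path="requirements.txt"):
    """Collect pinned components; warn on anything unpinned."""
    components = {}
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        match = re.match(r"([A-Za-z0-9_.-]+)\s*==\s*(\S+)", line)
        if match:
            components[match.group(1)] = match.group(2)
        else:
            print("WARNING: unpinned component:", line)
    return components

if __name__ == "__main__":
    for name, version in sorted(inventory().items()):
        print(f"{name}=={version}")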
We’ve heard enough depressing mobile security and malware news, so let’s take an “up-side” look at some recent research on the subject. Here’s how it went down, from Dark Reading:
Researchers from Northwestern University and North Carolina State University for one year tested popular mobile AV apps for Android on their ability to detect malware that uses evasion techniques, such as changing up the code or morphing a malware sample. Polymorphism can be as simple as changing the order of the code and data files or just renaming the file, or as complex as changing the appearance of the code but not its behavior.
Guess what, malware is still an issue and mobile antivirus software still doesn’t stop all the threats, blah blah blah, we know how that story goes. Instead of rehashing those issues, let’s focus on the positive: According to the study, many antivirus software options are getting better at detecting certain malware techniques.
The good news is that the tools appear to be getting better at detecting malware that uses basic transformation/obfuscation techniques, such as repacking or reassembling the malware, via unzip or rezip, for example. These methods don’t change the code, just the packaging. In 2012, 45 percent of the AV signatures failed to detect malware that used such basic transformation techniques, but this year only 16 percent of them have missed “trivially” transformed malware samples so far, the researchers say.
In the ever-changing threat landscape of mobile security, progress is impressive. Companies have to stop the little things so they can focus time and energy on addressing larger, more complex challenges. The next challenge, the research suggests, is detecting malware that disguises itself by changing its code.
“The result that we have here certainly indicates improvement: Anti-malware tools do not succumb as frequently to such trivial transformations. However, this is far from good. As long as anti-malware tools continue to use content-based signatures, evading them is really easy,” Chen says.
Today’s mobile AV signatures are based on byte patterns in the malware, and malware writers can easily evade AV tools by changing those bytes, according to the researchers. Some 90 percent of the malware signatures studied by the researchers don’t use static analysis of the byte-level code. Dr. Web was the only AV product employing static analysis, they say.
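To see why byte-pattern signatures are so brittle, consider what trivial repacking does to a whole-file hash. This runnable sketch (the “app” contents are fabricated) builds the same archive twice with different packaging and shows that a naive file-level signature already fails to match, even though nothing inside changed:

```python
import hashlib
import io
import zipfile

PAYLOAD = {"classes.dex": b"dex\nfake bytecode",
           "AndroidManifest.xml": b"<manifest/>"}

def pack(compression):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression) as zf:
        for name, data in PAYLOAD.items():
            zf.writestr(name, data)
    return buf.getvalue()

original = pack(zipfile.ZIP_DEFLATED)
repacked = pack(zipfile.ZIP_STORED)   # same files, different packaging

print("original :", hashlib.sha256(original).hexdigest())
print("repacked :", hashlib.sha256(repacked).hexdigest())

# The archive-level hashes differ, but every file inside is untouched:
for name in PAYLOAD:
    a = zipfile.ZipFile(io.BytesIO(original)).read(name)
    b = zipfile.ZipFile(io.BytesIO(repacked)).read(name)
    assert a == b
print("file contents identical; whole-file signature already broken")
```

This is exactly the “unzip/rezip” trick from the study: no code changes, yet any signature keyed to the packaged bytes stops matching.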
When you think about bad mobile app security, Android tends to come to mind. The open nature of Android makes it (theoretically) easier for malicious apps to find their way into the app store and onto users’ devices. While intentionally malicious apps may be a problem for Android, when it comes to data leaks and the loss of personal information iOS is actually a bigger security offender, according to Veracode’s recent State of Software Security report. From Computer Weekly:
Surprisingly, 26% of Android apps exhibited information leakage bugs, compared with 42% on iOS. This covers the leakage of personal information such as email, text messages, GPS coordinates, and the content of users’ address books.
“When you install an Android app, it requests access to certain phone functionality. The app developer has to request explicit access, while on iOS a developer does not have to request access,” said [Chris Eng, vice-president of research at Veracode].
Even when developers take the extra steps to make their apps secure, their approaches may be misguided. Trying to build in cryptographic keys to protect user data can actually make security worse if not done correctly. This issue is troubling for both major operating systems.
Overall, cryptographic issues affected a sizeable portion of Android (64%) and iOS (58%) applications.
The report warned that using cryptographic mechanisms incorrectly can make it easier for attackers to compromise the application. Cryptographic keys are used to protect transmitted or stored data.
It found that in some applications, developers had hard-coded a cryptographic key directly into a mobile application. Should these hard-coded keys be compromised, any security mechanisms that depend on the privacy of the keys are rendered ineffective.
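It’s worth spelling out how weak a hard-coded key really is: anyone with a copy of the app binary can recover it with a strings-style scan, no cryptanalysis required. A toy illustration (the “binary” and the key are fabricated for the demo):

```python
import re

# Stand-in for a compiled app binary with a key baked into it.
binary = (b"\x00\x7fELF...code...\x00"
          b"AES_KEY=6fd9a1c4e8b2735a0d4c9e1f2b387a56\x00"
          b"...more code...\x00")

# A strings-style sweep for printable runs, then for hex-looking
# blobs of plausible key length (16/24/32 bytes -> 32/48/64 hex chars).
for run in re.findall(rb"[ -~]{8,}", binary):
    for blob in re.findall(rb"\b[0-9a-fA-F]{32,64}\b", run):
        print("possible hard-coded key:", blob.decode())
```

Keys belong in the platform keystore (or derived per user at runtime), not in the shipped binary, precisely because extraction is this cheap.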
Mobile app security is complicated. Developers and testers need to keep working to understand the issues and learn how to best address them.
In the business world, a year-over-year growth rate of 163% is cause for celebration. In the world of mobile malware, a 163% growth rate is cause for consternation. If we are to believe a recent study from mobile security provider NQ Mobile, that’s the present situation for the Android operating system – and it’s probably going to get worse.
Here’s TechCrunch with the story:
Trends indicate we’ll only see more attacks, and more creative ones, according to NQ. In February, security researchers identified a new type of malware that uses an Android device as a launch platform for infecting a target computer via USB connection, the company said. That remains limited to only a few identified infected handsets, but it’s a troubling attack vector that could pose plenty of problems down the road if it becomes more sophisticated. In a release, NQ Mobile co-CEO Omar Khan said that what’s needed is a system that can detect threats in advance of infection and prevent them, something which so far hasn’t really been widely available.
NQ Mobile’s report found that more than 32.8 million Android devices were infected over the course of 2012, up more than 200 percent from 2011. Of course, the general Android device population grew massively over the course of the year – a recent ABI Research study indicates that there will be over 798 million active Android devices by the end of the year, compared to around 300 million as of early in 2012. And the U.S., despite having a large chunk of the overall user population, is actually further down the list in terms of target countries, with just 9.8 percent of infected devices, compared to 25.5 percent in China, 19.4 percent in India and 17.9 percent in Russia.