
Archive for the ‘Internet’ Category

Shell Team Six: Zero Day After-Party (Part III)

Monday, February 23rd, 2015

This is the third part of a six-part blog based on the paper my colleague Gregory and I submitted on Advanced Persistent Threats (APT) for AVAR 2014.

Continuing from the second part of our paper…

Exploiting Popular Applications

Popular applications such as web browsers and word processors, in an attempt to provide rich functionality, at times fail to handle untrusted data properly. Attackers probe these applications using a variety of mechanisms, such as fuzzing, reverse-engineering and the study of any stolen code, in order to discover bugs that allow them to execute malicious code without any user interaction.

A lack of buffer boundary checks in the application’s code is exploited: a critical memory area is overwritten to hijack the control flow of the program and execute the attacker’s shell code.

Likewise, bugs in handling multiple references to the same object have led to the Use-After-Free class of vulnerabilities which, after memory areas are seeded with malicious code, can be exploited to execute the attacker’s shell code.

Data Execution Prevention (DEP) Bypass

DEP is a security feature provided by the operating system to thwart buffer overflow attacks that store and execute malicious code from a non-executable memory location. The OS leverages the No-eXecute technology in modern day CPUs to enforce hardware assisted DEP that prevents memory areas without explicit execute-privilege from executing. Attempts to transfer control to an instruction in a memory page without execute-privilege will generate an access fault, thereby rendering the attack ineffective.

Bypassing the DEP feature in a process involves locating already existing pieces of executable code from process memory space and manipulating them to use attacker controlled data to achieve arbitrary code execution. This is accomplished using one of the following techniques:

  • Return-to-libc
  • Branch Oriented Programming (BOP)
    • Return Oriented Programming (ROP)
    • Jump Oriented Programming (JOP)

Return-to-libc

This evasion technique involves replacing the return address on the call stack with that of an existing routine in a loaded binary. The parameters/arguments passed to such routines are controlled by exploit data strategically placed on the stack. A system function like WinExec() can thus be invoked to load and run a malicious component without ever executing the non-executable exploit data.


Fig.6: The stack layout when using return-to-libc attack to invoke system() in GNU Linux (32-bit).
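The stack layout of Fig.6 can be sketched as a payload builder. All addresses and the buffer length below are hypothetical placeholders for the real libc addresses an attacker would have to know on the target:

```python
import struct

# All addresses are hypothetical placeholders (32-bit process, ASLR absent);
# a real exploit would use the actual libc addresses on the target system.
SYSTEM_ADDR = 0xb7e42da0   # address of system()
EXIT_ADDR   = 0xb7e365f0   # address of exit(), the fake return address for system()
BINSH_ADDR  = 0xb7f63a24   # address of the string "/bin/sh" inside libc

def build_ret2libc_payload(buffer_len):
    """Reproduce the Fig.6 stack layout: filler up to the saved return
    address, then &system | &exit | &"/bin/sh"."""
    payload = b"A" * buffer_len                # overflow filler
    payload += struct.pack("<I", SYSTEM_ADDR)  # overwrites the saved return address
    payload += struct.pack("<I", EXIT_ADDR)    # return address system() will see
    payload += struct.pack("<I", BINSH_ADDR)   # system()'s single argument
    return payload

payload = build_ret2libc_payload(64)
```

No injected instructions ever execute: only the addresses of existing code and data are written to the stack.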

Branch Oriented Programming

This bypassing method involves an attacker gaining control of the call stack and executing carefully stitched-together pieces of existing executable code called “gadgets”. These gadgets are short instruction sequences, typically ending in a return instruction (ROP) or a jump instruction (JOP), located in subroutines within an existing program or a shared library. Chained together, these gadgets allow an attacker to perform arbitrary operations on a machine.

Fig.7: ROP gadget execution sequence based on exploit controlled stack layout
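The gadget-chaining idea of Fig.7 boils down to packing gadget addresses as consecutive stack slots, so that each gadget’s terminating ret pops the address of the next. A minimal sketch, with entirely hypothetical gadget addresses:

```python
import struct

# Hypothetical gadget addresses, as an attacker would harvest offline from
# a module loaded at a known base.
GADGETS = [
    0x10015f82,  # pop eax ; ret
    0x00000040,  # immediate value popped into eax
    0x100208c1,  # pop ecx ; ret
    0x00001000,  # immediate value popped into ecx
    0x1003d9b4,  # next gadget; its trailing ret keeps the chain going
]

def build_rop_chain(gadgets):
    """Pack entries as consecutive 32-bit little-endian stack slots: each
    gadget's terminating ret pops the next slot into EIP, as in Fig.7."""
    return b"".join(struct.pack("<I", g) for g in gadgets)

chain = build_rop_chain(GADGETS)
```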

Address Space Layout Randomization (ASLR) Bypass

In order to thwart BOP attacks, operating systems introduced the concept of randomizing the locations of executable code, by randomizing the base address of each loaded binary on every system reboot. This security measure, known as ASLR, made it difficult for the attacker to predict where the required gadget sequence resides in memory. However, APTs have been observed bypassing this protection using the following techniques:

Loading Non-ASLR modules

Dynamic-Link Libraries compiled without the dynamic-base option cannot take advantage of the protection offered by ASLR and, as a result, are usually loaded at a fixed base address. For example, Microsoft’s MSVCR71.DLL, shipped with Java Runtime Environment 1.6, is usually loaded at a fixed address in the context of Internet Explorer, making it easy to construct the required gadget chain in memory.

Fig.8: An ASLR incompatible version of MSVCR71.dll

DLL Base Address calculation via Memory Address Leakage

This technique involves determining the base address of any loaded ASLR-compatible DLL based on any leaked address of a memory variable or API within that DLL. Based on the address of this known entity, the relative addresses of all the required gadgets can be calculated and a ROP attack constructed.

Attack techniques such as modifying a BSTR’s length or null termination allow access to memory areas outside the original boundaries, leading to the memory addresses of known items being revealed to the exploit code. These can then be used to pinpoint the DLL’s location and use ROP gadgets within it. The Array() object also has a length field that can be overwritten to leak memory addresses beyond its bounds.
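Once a single pointer has leaked, the base-address arithmetic is trivial. A sketch with hypothetical addresses and offsets:

```python
# All values hypothetical: a single leaked pointer into an ASLR'd DLL is
# enough to recover its randomized base and, from it, every gadget address.
LEAKED_API_ADDR  = 0x6a8b13c0                # leaked at runtime via the infoleak
KNOWN_API_OFFSET = 0x000113c0                # that export's offset, known from static analysis
GADGET_OFFSETS   = [0x00002f1e, 0x0001a3c2]  # gadget offsets found offline

dll_base = LEAKED_API_ADDR - KNOWN_API_OFFSET
gadget_addrs = [dll_base + off for off in GADGET_OFFSETS]
```

Since the whole DLL is relocated as one unit, every internal offset survives randomization; only the base changes.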

Browser Security Bypass

Leveraging the operating system’s security, popular web browsers run certain parts of their code, JavaScript execution and HTML rendering for example, as sandboxed background processes. These processes run with limited privileges and have restricted access to the file system, network, etc. A master controller acting as an intermediary interacts with the user and manages the sandboxed processes. By using this master-slave architecture and providing a controlled environment, users are protected from exploit attempts: the shell code’s capability to access host system resources is limited and its damage is confined to within the sandbox.

Since these browsers rely on the operating system’s security model, exploiting unpatched kernel vulnerabilities allows malicious code to escape its confined environment. The infamous Duqu malware relied on a vulnerability (CVE-2011-3402) in the Win32k.sys driver that improperly handled specially crafted TrueType Font (TTF) files. This allowed the malware to escape the user-mode sandboxed environment implemented by the Microsoft Word process and compromise the host.

Fig.9: Vulnerable code snippet from win32k.sys that led to the Duqu TTF exploit

Enhanced Mitigation Experience Toolkit (EMET) Bypass

EMET is a Microsoft tool that provides additional security to commonly-exploited third-party applications such as web browsers, word processors, etc. It extends the operating system’s protection mechanisms to these vulnerable applications and makes exploitation attempts extremely difficult.

The following table lists the protections offered by EMET and known bypassing techniques [4]:

Click here to read the fourth part of this blog

References:
[4] http://bromiumlabs.files.wordpress.com/2014/02/bypassing-emet-4-1.pdf
[5] http://0xdabbad00.com/wp-content/uploads/2013/11/emet_4_1_uncovered.pdf

Lokesh Kumar
K7 Threat Control Lab

If you wish to subscribe to our blog, please add the URL provided below to your blog reader:
http://blog.k7computing.com/feed/

Shell Team Six: Zero Day After-Party (Part II)

Wednesday, February 11th, 2015

This is the second part of a six-part blog based on the paper my colleague Gregory and I submitted on Advanced Persistent Threats (APT) for AVAR 2014.

Continuing from the first part of our paper

Initial Compromise

Armed with information obtained from the previous stage, the perpetrators may adopt several techniques to sneak into the organization. Traditional attacks involve actively targeting vulnerable applications and exploiting Internet-facing resources like web servers, SQL servers, FTP servers, etc. As log analysis and security around these external resources have caught on, the attackers have had to evolve their tactics in order to be successful.

Infiltration Methodology

The attackers now target the most vulnerable element of any organization – the human. Social engineering tactics are used to entice an individual or a group of users into running code, which will allow the attackers to introduce their malware into the organization’s network. The most commonly used attack techniques are:

  • Spear Phishing
  • Watering Hole

Spear Phishing

Spear phishing involves the attacker compromising a machine by sending a well-crafted email to a targeted user and convincing him/her to:

  • Open an embedded link that points to a website loaded with zero-day exploits, or
  • Open a malicious attachment (EXE, PDF, DOCX etc.)

both of which exploit the rendering application to drop or download, and then execute, a payload with backdoor capabilities.

Watering Hole

 

A watering hole attack involves the attacker placing exploits, possibly zero-day in nature, on a trusted website which is frequented by users of the target organization. When a targeted user visits the site, the exploit code is automatically invoked and the malware is installed on his/her machine.

Case Study

The U.S. Veterans of Foreign Wars’ website was recently compromised to serve a zero-day exploit (CVE-2014-0322). A similar watering hole attack exploiting zero-day vulnerabilities has occurred in the past targeting a specific group of people by compromising the website of the Council for Foreign Relations.

Fig.2 shows publicly available website access logs of users along with their non-routable IP addresses. This information can be used to evaluate the browsing habits of individuals in the company and eventually to execute a watering hole attack.


 
Fig.2: Publicly available map of internal IP addresses and their website logs

Security Bypassing

Email attachments, file downloads, HTTP requests, etc. originating from users undergo rigorous checks at various layers that include:

  • Network/Gateway layer scanners
    • Email/File/URL scanners
    • Sandboxed file analysis
  • Endpoint/Desktop layer scanner
    • Anti-Virus/HIPS/firewall
    • Application security features
    • Operating system security features

Once the human element falls prey to social engineering, and is coaxed into downloading a file/email or visiting an exploit site, the attackers are faced with the challenge of defeating a series of network and endpoint security solutions before conquering the victim’s machine. Listed below are some of the tactics used by the perpetrators to bypass these layers of security.

Attachment Archive File Format Abuse

Discrepancies between the way a security product handles a compressed file and the way an un-archiving application does have led to abuse of the popular ZIP file format. Un-archiving apps identify ZIP file types by scanning the last 64KB of the file for a special magic marker. Security scanners, on the other hand, with a need for speed, identify the file type by inspecting only the first few bytes at the beginning of the file.

An attacker abuses this disparity by creating a malicious ZIP file and manipulating its headers by adding junk data at the beginning of the ZIP file. This specially crafted file deceives security scanners into thinking that it is of an unknown type and escapes detection, but un-archiving applications are able to successfully extract the malicious code at the end point.

Fig.3 shows a Proof-of-Concept [2] archive file that is capable of evading security scanners.

Fig.3: Crafted ZIP file with NULL data prefixed.
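The disparity is easy to reproduce with Python’s standard zipfile module, which, like an un-archiver, locates the ZIP structures by scanning from the end of the file. The snippet below is a benign sketch of the technique, using harmless text in place of malicious code:

```python
import io
import zipfile

# Build a normal ZIP archive in memory...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("payload.txt", "harmless demo content")

# ...then prepend junk, so there is no ZIP magic at offset 0. A scanner
# keying on the leading bytes no longer recognizes this as a ZIP file.
crafted = b"\x00" * 1024 + buf.getvalue()

# An un-archiver locates the End of Central Directory record from the END
# of the file, so extraction still succeeds despite the bogus prefix.
with zipfile.ZipFile(io.BytesIO(crafted)) as zf:
    data = zf.read("payload.txt")
```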

Gateway Sandboxing Bypass

Suspicious files that match certain criteria are typically executed within a sandboxed environment for a short period of time. Depending on their behavior, the files are either blocked from the user or released to him/her.

Attackers can craft malicious files which detect such controlled settings by looking for specific registry keys, in-memory code changes, mouse pointer movement, etc.

For example, if the malicious file identifies that it is being executed in a sandboxed environment, it stays idle without performing any activity, thereby bypassing this check. The Up-Clicker Trojan [3] attempts to evade sandbox analysis by staying idle and waiting for a mouse click before activating itself.

Fig.4: Code showing Up-Clicker Trojan set to activate on mouse click
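Sandbox-awareness checks of this general kind can be sketched in a few lines. The heuristics below (CPU count, disk size) and their thresholds are illustrative stand-ins, not Up-Clicker’s actual logic, which installs a Windows mouse hook and waits for a click:

```python
import os
import shutil

def looks_like_sandbox(min_cpus=2, min_disk_gb=60):
    """Illustrative environment checks of the kind evasive malware performs:
    automated-analysis VMs are often provisioned with a single vCPU and a
    small virtual disk. Thresholds here are arbitrary examples."""
    few_cpus = (os.cpu_count() or 1) < min_cpus
    small_disk = shutil.disk_usage(os.sep).total < min_disk_gb * 2**30
    return few_cpus or small_disk

# A sample would simply go idle (or, in Up-Clicker's case, wait for a
# mouse click) whenever a check like this returns True.
```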

Browser Multipurpose Internet Mail Extensions (MIME) Sniffing

This attack exploits differences in the way in which security scanners and web browsers identify the content returned by an HTTP server.

Security scanners parse the magic header at the beginning of a file returned by the web server to identify the file type. This means that a specially crafted malicious HTML file containing the magic marker commonly found in a GIF image will be identified by the scanner as an image file, exempted from scanning and let through into the network.

Web browsers, on the other hand, depend on the MIME type in the HTTP response header returned by the web server to identify the file type. When this information is absent, as is the case with a response from an attacker-controlled web server, the web browser resorts to content sniffing to determine the MIME type. So, the same malicious HTML file containing the GIF magic marker will now be identified as HTML content by the user’s browser and rendered as such, executing the exploit code.

Fig.5: Malicious script containing bogus RAR and GIF magic markers.
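The scanner/browser disagreement can be illustrated with a toy GIF/HTML polyglot and two deliberately simplified sniffers (real browser content sniffing is far more involved):

```python
# A toy polyglot: GIF magic bytes at offset 0, HTML markup right after.
GIF_MAGIC = b"GIF89a"
html_body = b"<html><body><script>/* exploit code would go here */</script></body></html>"
polyglot = GIF_MAGIC + html_body

def sniff_by_magic(data):
    """File-type identification the way a fast gateway scanner might do it:
    look only at the leading magic bytes."""
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "image/gif"
    return "application/octet-stream"

def sniff_like_browser(data):
    """Crude approximation of browser content sniffing: markup found in the
    first 512 bytes wins, regardless of the magic bytes."""
    head = data[:512].lower()
    if b"<html" in head or b"<script" in head:
        return "text/html"
    return sniff_by_magic(data)
```

The same byte stream is thus an “image” to the scanner and executable HTML to the browser.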

Click here to read the third part of this blog

References:
[2] http://www.reversinglabs.com/news/vulnerability/reversinglabs-vulnerability-advisories.html
[3] http://www.infosecurity-magazine.com/news/trojan-upclicker-ties-malware-to-the-mouse

Lokesh Kumar
K7 Threat Control Lab

If you wish to subscribe to our blog, please add the URL provided below to your blog reader:
http://blog.k7computing.com/feed/

Shell Team Six: Zero Day After-Party (Part I)

Wednesday, January 21st, 2015

This is the first part of a six-part blog based on the paper my colleague Gregory and I submitted on Advanced Persistent Threats (APT) for AVAR 2014. This first part introduces the reader to the different phases of an APT and discusses the methodology, prevention and detection techniques of the initial phase of an attack in detail.

The IT security industry is faced with the challenge of dealing with old invasion tactics that have been reborn in new avatars as Advanced Persistent Threats (APTs). APT attacks are tenacious at pursuing their targets and are played out in stages, possibly over a long period of time. With financial backing from state actors and criminal rings, APTs tend to be compound, sophisticated and difficult to detect. Each facet of the intrusion, in an idealized scenario, may be refined to such an extent that the end goal is achieved without a trace before, during or after the event.

Despite the complexity of these types of attacks, certain parameters always need to be satisfied to deliver the payload and retrieve the expected results, leading to the emergence of an attack pattern which may be placed under the microscope and flagged. These parameters include executing arbitrary code by invoking zero-day exploits for popular software, defeating security measures such as DEP & ASLR, e.g. via heap spray and ROP/JOP chains, exploiting Elevation-of-Privilege (EoP) vulnerabilities, establishing remote C&C communication channels to issue commands or to exfiltrate stolen data in encrypted form, etc.

Drawing on evidence from documented real-world case studies, this paper details techniques that assist an assailant during the different phases of an APT, bypassing protection mechanisms like application-sandboxing, EMET, IDS, etc. thus attempting to fly under the defense radar at all times. Equipped with this information, we hope to explore methods of discovering each part of the life-cycle of a targeted attack as it is in progress or in the post mortem, thus reducing their efficacy and impact.

Introduction

“If you know your enemies and know yourself, you will not be imperiled in a hundred battles… if you do not know your enemies nor yourself, you will be imperiled in every single battle.” – Sun Tzu

As the technologies implemented in organizations become more advanced, the threats are rapidly evolving too. Through tenacious and coordinated attacks on one’s infrastructure, APTs are able to infiltrate and overwhelm the organization.

The threat landscape has changed, but the general principles of war remain the same. Knowing the modus operandi of your faceless attackers helps one evaluate and harden one’s security measures, and gear up to face the attackers head on. This paper aims to help you do just that.

APT Life-Cycle

The stages of an APT can broadly be classified as follows:

•   Target reconnaissance
•   Initial compromise
•   Expanding access and strengthening foothold
•   Data exfiltration and cleanup

 

Target Reconnaissance

The reconnaissance phase of a targeted attack sets the stage for the rest of the threat campaign and therefore involves a high degree of planning. The perpetrators spend significant amounts of time learning about their target, collecting as much information as possible about the human, physical and virtual resources of the organization. The intelligence garnered during this stage not only helps the assailants determine key points of entry into the target network but also empowers them to navigate the victim’s network more effectively & efficiently once inside.

Reconnaissance Methodology

The target’s virtual network is plotted using publicly available resources. These resources include:

•   DNS records
•   WHOIS information
•   Email messages
•   Inadequately protected network logs
•   Misconfigured servers, etc.

The organizational structure is also studied to determine employees and their organizational access levels, using social media, search engines and the target’s own website. Profile intelligence gathered could include potential passwords, personal and official email addresses, and whether the user is a regular employee, a SOHO user, or a contractor.

Based on this harvested intelligence the infrastructure needed for the attack will be acquired, the course of action to successfully execute the campaign will be determined & evasion techniques that could be followed during the attack will be planned. New domains may be registered, command and control servers set up, exploits crafted, vulnerable employees identified, custom social engineering schemes plotted for these target employees, malicious files created, etc.
 
Recently, US airport workers from over 75 airports were targeted via malicious emails based on information such as their names, titles, and email addresses that were harvested via publicly-available documents [1].

Fig.1 shows how a simple search engine query can divulge information like emails exchanged between personnel in public forums, which may seem innocuous but can be used to launch a spear phishing attack. Popular mailing lists mask this sensitive information to prevent it from being scraped and abused by bots. Valid users, on the other hand, are allowed access after solving a simple CAPTCHA.

Fig.1: Search result revealing email addresses and other information about employees of an organization.

Prevention/Detection

Most of the intelligence collected by the assailants during this stage is publicly available and in general doesn’t involve the attackers touching any of the internal systems. Information that was gathered from previous APT campaigns but applicable to the current one could also be reused. This makes detecting an APT during these early stages of the attack challenging.

Usual best security practices such as conducting periodic penetration tests, hardening the applications & the operating systems, etc. are still relevant, but these measures by themselves don’t stand a chance against this adversary.

Organizations should take care to both restrict the amount of information that is flowing outside and be aware of publicly available sensitive information which could potentially be used against them.

Profile Scraper

Automated bots can be used to collect publicly available information on the company, the employees, etc. from popular social networking sites and search engines, etc. The data collected can automatically be analyzed for potential sensitive leaks.

Honey Profiles

Fake profiles at different organizational levels, meant to be trip wires, can be set up on popular social networking sites, and connection attempts and profile hits can be analyzed. This could allow organizations both to recognize if they are being targeted and to predict which individual or group of individuals is being targeted.

Click here to read the second part of this blog.

References:
[1] http://www.seculert.com/blog/2014/07/extended-apt-campaign-targeted-us-airports.html

Images courtesy of google.com

Lokesh Kumar
Manager, K7 Threat Control Lab

If you wish to subscribe to our blog, please add the URL provided below to your blog reader:
http://blog.k7computing.com/feed/

“I’m not a robot”, Google to reCAPTCHA the Flag

Friday, December 12th, 2014

Over the years, online users have had to identify obscure images, typically worn-out text from old newspapers or street addresses, and type the contents into a box to prove their humanness. CAPTCHA (an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”), as this process is called, helped prevent robots from gaining illegal access to websites, in order to propagate spam (unsolicited messages), for example.

However, these days advanced Artificial Intelligence technology with image recognition can solve CAPTCHA puzzles with astonishing accuracy, a whopping 99.8% according to Google. In an attempt to beat these more advanced bots, Google has recently launched a new API (Application Programming Interface) called No CAPTCHA reCAPTCHA.

With No CAPTCHA reCAPTCHA, users are now directly asked to check a box as shown above. If this step is still insufficient to confirm the user’s humanness, a CAPTCHA is thrown. This CAPTCHA asks the user to match a given image with a set of images, usually animals or birds. Though this approach appears simple, Google claims that advanced risk analysis runs on the backend, monitoring the user’s interaction with the CAPTCHA till the very end. This is a welcome change, especially for mobile users, who face mild inconvenience in resolving the distorted images.

We hope No CAPTCHA reCAPTCHA will be more effective in the fight against the bots created by cyber criminals.

Images courtesy of:

xpda.com
imgur.com

Archana, Content Writer

If you wish to subscribe to our blog, please add the URL provided below to your blog reader:

http://blog.k7computing.com/feed/

SOCK! BASH!! SLAP!! PINCH! Battling Vulnerability Fatigue!

Wednesday, October 15th, 2014

Whilst the ghost of Shellshock still haunts everybody, two diametrically opposite vulnerabilities have made the headlines over the past 24 hours or thereabouts:

  1. CVE-2014-4114, a remote code execution vulnerability in the Microsoft OS’s rendering of certain OLE objects, actively exploited in the wild, allegedly by Russian threat actors
  2. CVE-2014-3566, effectively a data leak vulnerability in SSL 3.0 for which a PoC attack to steal secure session cookies has been described by the discoverers of the vulnerability at Google

Let’s discuss CVE-2014-4114 first, since its impact is more severe given the remote code execution aspect and the evidence of malicious exploitation in the wild. The good news is that Microsoft issued a patch for this vulnerability yesterday. As members of the Microsoft Active Protections Program (MAPP), we at K7 have also received more information about how the vulnerability can be exploited. We have already secured protection against known bad exploit files, and a heuristic fix is ready. As an additional paranoid step, if you have a K7 product with the firewall installed, it should be possible to add a carefully-configured firewall rule for Microsoft Office OLE-rendering applications, e.g. POWERPNT.EXE, EXCEL.EXE and WINWORD.EXE, to prevent them from accessing remote network locations, thus mitigating the silent download and rendering of malicious files.

Now then, CVE-2014-3566; the Google PoC describes a Man-in-the-Middle attack which can be used to steal a supposedly secure session cookie (though this could be any encrypted data) IF the encryption channel is SSL 3.0 based. Serious as this sounds, CVE-2014-3566 is not as potent as the bash vulnerability suite, and not as valuable as Heartbleed in the grand scheme of things, because there are several mitigating factors:

  1. The communication has to be via SSL 3.0, which is an antiquated, discredited protocol long since replaced by the more secure TLS. Of course, client-side browsers may be duped into believing that the server supports only SSL 3.0, and therefore switch to this protocol
  2. The attacker has to insert himself/herself between the client and the server in order to control the format of the traffic and derive the tasty data byte-by-byte
  3. The encrypted traffic itself, separated into blocks, needs to lend itself to the attack in the sense that certain content deemed interesting to the attacker must be at deterministic locations in the encrypted blocks, with a rinse and repeat function as part of the modus operandi.
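On the defensive side, the standard mitigation is to refuse to negotiate SSL 3.0 at all. As a sketch, Python’s ssl module lets a client context be pinned above SSLv3 (the TLS 1.2 floor shown is an illustrative choice, not a quote from the advisory):

```python
import ssl

# Defensive sketch: a client-side TLS context that can never fall back to
# SSL 3.0. Setting a minimum protocol version excludes SSLv3 (and early
# TLS versions) from negotiation entirely.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Servers can apply the same floor, which removes the protocol-downgrade path the PoC depends on.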

At the recently-concluded Virus Bulletin 2014 conference, at which we were Shellshocked for the first time, the managing of vulnerability disclosures was extensively discussed. The above couple of vulnerability disclosures have been suitably managed, minimising the impact on the general public.

Samir Mody
Senior Manager, K7TCL

If you wish to subscribe to our blog, please add the URL provided below to your blog reader:

http://blog.k7computing.com/feed

https://icann-deal.with.it (Part 3)

Thursday, October 9th, 2014

This is the final part of a three-part blog based on my paper for AVAR 2012 that discusses the security challenges involved in adopting two relatively new technologies, namely, Internet Protocol Version 6 and Internationalized Domain Names.

Continuing from the second part of my paper…

Social Engineering

Malware authors, spammers and phishers, who now have a larger character set to play with, are likely to register domains resembling an original site to trick users into divulging information.

Fig.10 below shows the domain information for baidu.com and an IDN equivalent. Considering that the name servers, the e-mail address used to register the domain, etc., do not match, even security-savvy users are likely to find it tricky to validate a URL from such IDNs before visiting it.

Fig.10: whois information on the original baidu.com and the squatted IDN version

Thanks to social networking sites like Facebook, Twitter, etc., which enable instant sharing of information among millions of users from different backgrounds, uncommon URLs could invoke a click from curious users even if they don’t recognise the character set. Malware campaigns such as these, though short-lived, could still cause enough damage globally.

Fig.11: Representative example of an attack based on socially engineered IDNs

Matching Incongruence

URL scanners could focus more on consistency or the lack thereof while dealing with phishing and malware related URLs arriving from IDNs. Language mismatch between the message body of the e-mail and the URL, or the URL and the contents of the page that the URL points to, can be deemed suspicious.

Restrictions may be imposed on visiting IDNs which don’t match a user-defined list of allowed languages. Similarly, domains created by combining visually similar characters from different character sets can also be curbed. Popularly known as a Homograph attack, most common browsers already defend users against such threats. While this protection is only limited to within the browser, it can be extended to protect e-mail, social networking and other layers as well [12].

Fig.12 below shows two domains, one created entirely using the Latin character set and the other using a combination of Latin and Cyrillic character sets. Though both domains visually appear to be similar, their Punycode representation proves otherwise.

Fig.12: Example of two visually similar domains and their Punycode representation [13]
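A simple check of this kind can be sketched with Python’s standard unicodedata module: flag labels that mix scripts, and compare their Punycode forms. The domain strings below are illustrative:

```python
import unicodedata

def scripts_of(label):
    """Set of scripts used in a domain label, approximated by the first
    word of each alphabetic character's Unicode name (LATIN, CYRILLIC...)."""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

latin = "wikipedia"
homograph = "wikipedi" + "\u0430"   # ends in U+0430 CYRILLIC SMALL LETTER A

mixed = scripts_of(homograph)       # renders like 'wikipedia', but mixes scripts
puny = homograph.encode("idna")     # the Punycode form betrays the difference
```

A URL scanner could treat any label whose script set has more than one member, or that mixes a script with one not on the user’s allowed list, as suspicious.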

Security vendors could also continue existing practices of assigning a poor reputation to domains that originate from certain high-risk countries. Such domains are usually created due to nonexistent or inadequate cyber laws in the host country, which result in malware authors abusing them. Reputation can also be assigned to registrars of IDNs based on their commitment to handling abuse reports, enforcement and verification of registrant details, ease of registering domains in bulk, etc.

A solution to address the e-mail spam problem could involve creating a white list of registered mail servers. The Ipv6whitelist.eu project, for example, works on the assumption that all computers send out spam, unless they have been previously registered on the white list [14]. In addition, since there are few mail servers catering to a significantly large user base, one could argue that e-mail could continue using IPv4, which could breathe new life into the practice of IP blacklisting, at least for e-mail spam.

There is a Certainty in Uncertainty

The implications of the transition from IPv4 to IPv6, and the introduction of IDNs, are bound to be of major significance to the Internet infrastructure. These changes engender the continuous growth of the Internet by accommodating an increasing number of inter-connected devices, and variegated foreign languages.

As with any change, given the absence of a crystal ball, the move to these new technologies involves risk. Without doubt spammers, phishers and malware authors, seeking to make a quick buck, will exploit the larger attack surface provided by a vastly increased IP address space and language diversity via IDNs. We in the AV industry must take cognizance of this to determine the security implications and forge robust solutions.

As discussed in this paper, the new technologies will put pressure on current methods to counter spam, phishing and malicious URLs, especially where reputation is of prime importance. Fortunately, AV vendors have generally been able to adapt to the regular inflow of new issues, with new responses for these constantly on the anvil.

The changes about to be witnessed and the solutions proposed are likely to have security companies relying heavily on aggressive heuristics and policy-based restrictions, which could increase the number of false positives. However in corporate environments, rules can be configured to suit the risk appetite of the user in question.

Things are about to get a whole lot more difficult. However, greater vigilance, user education, and as ever, timely security industry data sharing, will help in controlling the fallout. The challenge is indeed a major one, but it is certainly not insurmountable. we.can.deal.with.it

References:
[12] http://en.wikipedia.org/wiki/IDN_homograph_attack#Defending_against_the_attack
[13] Information on http://en.wikipedia.org/wiki/IDN_homograph_attack
[14] Information on http://www.ipv6whitelist.eu

Lokesh Kumar
K7 Threat Control Lab

If you wish to subscribe to our blog, please add the URL provided below to your blog reader:
http://blog.k7computing.com/feed/

Keep e-Phishing at Bay

Friday, September 19th, 2014

A thus far undisclosed, potentially serious security flaw has been discovered on eBay according to BBC News. Hackers were apparently successful in exploiting a weakness on eBay’s website that enabled them to multi-redirect customers, via a landing page listing iPhones, to phishing pages purporting to be those of eBay so as to steal their login credentials.

Unfortunately, it is likely that several users would have been duped into surrendering their credentials, thus handing over control of their accounts to the bad guys. K7 users, however, would have been protected, since one of the redirector URLs was blocked by our malicious URL-blocking feature, which nullifies the multi-step redirector chain.

From the user’s side it is difficult to differentiate between legitimate and illegitimate redirection, so this is best left to the site-blocking components of internet security products such as K7 Total Security.

In addition, we found directory listing enabled and outdated plugins (such as JWPlayer) on the destination website to which users were being redirected. Based on website fingerprinting, the websites hosting the phishing pages were almost certainly compromised by the attackers to hide their tracks.

The phishing pages have now been removed, but the domains are still live, and we are not sure whether the core vulnerability which allowed the hackers in in the first place has been patched. In other words, the web server may be vulnerable to being hacked once more.

At the time of writing we are unsure whether the cross-site scripting (XSS) flaw exists in other eBay item listings, or whether it is currently being exploited elsewhere. Given the popularity of a site such as eBay, the impact of such an attack can be far-reaching and varied; it is possible to leverage redirections to deliver malware via drive-by-download attacks.

The question which pops up is, “Was this just a phishing attack?” It could have been much, much more damaging.

Image courtesy of mashable.com.

Priyal Viroja, Vulnerability Researcher, K7TCL


Quick Fixes for a Safer Online Banking Experience

Monday, September 15th, 2014

Recently, a researcher colleague at K7 Threat Control Lab faced a minor glitch in accessing his online banking account at one of India’s leading banks. This led him to explore the bank’s online banking website, and he was surprised to find that not only was the main login portal vulnerable to simple exploitation, but the authentication process also seemed weak in certain areas.

Driven by curiosity, we experimented with the entry level data validation mechanism at the online banking websites of major banks in India to discover if their online banking services are as sound as they claim them to be. Our very basic, high-level “field trials” made us realize that both the bank’s online security methods and user practices could potentially compromise the security of the bank’s online services.

We observed a few simple logic flaws in the online security process which could present loopholes for the bad guys to exploit, thus potentially bruising the bank’s online defences. Note: these logic flaws do not involve exploiting web application vulnerabilities such as XSS, SQL injection, RCE, etc.

Field Value Enumeration

A customer trying to access his account is required to submit a login form to confirm his authenticity. We noticed that most of the banking sites validated each entry of the login credentials separately. This kind of independent validation can lead to ‘Field Value Enumeration’, and subsequently to attackers deliberately locking out user accounts. For example, if a bank’s account policy locks users out after five failed login attempts, an attacker could lock out an account by deliberately sending an invalid password five times for a valid username. On a large scale, mass account lockouts could amount to a ‘Denial of Service’ attack, which, if successful, would harm the reputation of the targeted banking institution.
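The difference between field-by-field validation and combined validation can be sketched as below. This is an illustrative toy, not any bank’s actual code; the function names and the credential store are hypothetical.

```python
# Hypothetical login-handler sketch. The "leaky" version validates each
# credential field separately, telling an attacker which field was wrong
# (enabling field value enumeration); the "safe" version returns one
# generic error regardless of which field failed.

USERS = {"alice": "s3cret!"}  # toy credential store for illustration

def login_leaky(username, password):
    if username not in USERS:
        return "Unknown username"       # leaks that the username is invalid
    if USERS[username] != password:
        return "Wrong password"         # leaks that the username is valid
    return "OK"

def login_safe(username, password):
    if USERS.get(username) != password:
        return "Invalid username or password"  # no field-level information
    return "OK"
```

With the leaky variant, an attacker can first harvest valid usernames and then burn five bad passwords against each to trigger mass lockouts; the generic error removes that first step.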

Weak Usernames

Nearly 50% of the internet banking portals have a feeble username-strength validation process. Usernames should be unique, should ideally not be enumerable or guessable, and should never be a “Bank Client ID”, “Bank Customer ID” or “Email ID”. By setting username standards that require alphanumeric and special characters, the strength of usernames can be improved, making it that much more challenging for miscreants to abuse them.

Easy-to-Remember Passwords

The password is usually the critical barrier which blocks malicious intruders at entry. However, customers generally opt for passwords which are simple and easy to remember, which makes the hacker’s job a tad easier. For sturdy passwords, users should be required to meet criteria such as uppercase and lowercase letters, numbers, symbols, and a minimum length, as a precaution against brute-force and dictionary attacks.
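A minimal sketch of such a policy check, assuming an illustrative minimum length of 10 characters (the exact thresholds and the function name are our own choices, not any bank’s real policy):

```python
import re

# Illustrative password-policy check enforcing the criteria mentioned
# above: minimum length, uppercase, lowercase, digit and symbol.

def is_strong_password(pw, min_len=10):
    checks = [
        len(pw) >= min_len,              # minimum length
        re.search(r"[A-Z]", pw),         # at least one uppercase letter
        re.search(r"[a-z]", pw),         # at least one lowercase letter
        re.search(r"[0-9]", pw),         # at least one digit
        re.search(r"[^A-Za-z0-9]", pw),  # at least one symbol
    ]
    return all(bool(c) for c in checks)
```

A check like this raises the cost of dictionary attacks, though it is no substitute for rate limiting and server-side lockout policies.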

Additional validation from server side

User validation is mostly implemented in client-side scripting languages, and is therefore easily circumvented. Duplicate validation should ideally be implemented at the server end as well, to enhance the overall user validation process.

No CAPTCHA

Almost 60% of the online banking websites lack a CAPTCHA implementation. Incorporating a CAPTCHA as an additional step in the user authentication process can significantly mitigate automated bot and brute-force attacks.

Mail Notification for “Authentication”

Almost all online banking services have a mail delivery process for each user transaction that occurs. However, we noticed that 60% of net banking services do not send mail notifications on unsuccessful authentication attempts. Such a notification can be useful for apprising users of any unauthorized login attempt. The user’s inbox is unlikely to be bombarded with notifications, given that the probability of a legitimate user repeatedly typing in the wrong username and/or password is pretty low.

In conclusion, online banking services can be made more secure by employing enhanced protection strategies and by encouraging customers to adopt good security practices for usernames and passwords, thereby protecting their means of access to these online banking websites.

Image courtesy of halomedia.co.za.

Priyal Viroja & Archana Sangili, K7 Team


Drive by and you’ll be taken for a ride

Tuesday, September 9th, 2014

Recently we came across a commercial website catering to cycling enthusiasts that appears to have been compromised.

The site’s JavaScript files have all been injected with a malicious iframe, strategically placed between blocks of seemingly innocent HTML content. This is an age-old technique meant to trick webmasters, who tend to look for malicious code either at the beginning or at the end of a file.
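A webmaster-side check therefore needs to scan whole files rather than just their edges. A minimal sketch of such a scan (the regular expression and function name are our own illustration, not a production signature):

```python
import re

# Illustrative scanner: extracts the src URLs of <iframe> tags wherever
# they appear in a file's content -- not just at the top or bottom where
# webmasters habitually look for injected code.

IFRAME_RE = re.compile(r"<iframe[^>]*src\s*=\s*['\"]([^'\"]+)", re.IGNORECASE)

def find_injected_iframes(text):
    """Return the src URLs of any iframe tags found in the content."""
    return IFRAME_RE.findall(text)
```

Any hit inside a file that should contain only JavaScript is a strong sign of injection and worth manual review.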

On visiting the site, your browser loads all the JavaScript files for the page, which then redirect you to the malicious URL displayed in the screenshot above. This redirected site has just a few lines of HTML, like the ones below:

You’ll immediately be redirected to another URL that appears to be generated using a Domain Generation Algorithm (DGA). This third level of redirection then leads to the actual exploit code, which on successful exploitation drops a malicious payload named “wiupdat.exe”, thus completing the cycle of the classic drive-by download attack.
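To illustrate why DGA domains frustrate static blocklists, here is a toy DGA of our own construction (it is not the algorithm used in this attack): attacker and bot derive the same stream of throwaway domains from a shared seed, such as the current date, so blocking any one domain achieves little.

```python
import hashlib

# Toy domain generation algorithm (DGA) sketch: a deterministic chain of
# pseudo-random domains derived from a date seed. Both sides of the attack
# run the same code, so they rendezvous without a fixed, blockable domain.

def generate_domains(seed, count=5, tld=".info"):
    domains = []
    current = seed.encode()
    for _ in range(count):
        digest = hashlib.md5(current).hexdigest()
        domains.append(digest[:12] + tld)  # first 12 hex chars as the label
        current = digest.encode()          # chain the hash for the next domain
    return domains
```

Defenders consequently lean on detecting the pattern (entropy, registration age, resolution behaviour) rather than enumerating the domains.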

On further analysis of the executable, we realized that the malware pretends to be from K7 Computing by imitating our version strings like below:

This is done to gain the trust of users, who may choose to ignore the executable, thinking that it belongs to a reputed security vendor. K7 users are protected from this malicious file, the compromised website, and the intermediary URLs.

Imitations are flattering!!!

Melhin Ahammad
K7 Threat Control Lab


https://icann-deal.with.it (Part 2)

Thursday, September 4th, 2014

This is the second part of a three-part blog based on my paper for AVAR 2012 that discusses the security challenges involved in adopting two relatively new technologies, namely, Internet Protocol Version 6 and Internationalized Domain Names.

Continuing from the first part of my paper…

Internet Metamorphosis

The Internet is witnessing a critical phase in the transition from an old technology to a new one, and users must understand the security implications involved. These implications could manifest themselves either during the implementation stage or after.

Tunnel Vision. IP tunnelling implementation involves encapsulating the IPv6 packets into IPv4, which is similar to creating a Virtual Private Network (VPN). Teredo, for example, is a tunnelling protocol that is installed by default on Windows Vista and Windows 7 operating systems, and provides IPv6 connectivity to a native IPv4 device [7].

Fig.4: Example of tunnelled IPv6 traffic[8]

Since the IPv6 contents are disguised inside the IPv4 packets, most security devices struggle to analyse and detect them. This in turn opens the door for attacks when these tunnels are used to transport malware.

There have been known instances of malware which enable IPv6 on a compromised host to communicate with its creator using these IP tunnels. The fact that IPv6 is enabled by default on most new operating systems makes it easier for malware to spread without being noticed. The infamous Zeus, for example, is known to support IPv6 from early 2010 onwards. This malware not only boasts of having the capability to sniff IPv6 traffic, but also supports an IPv6 Peer-to-Peer network [9].

Stack ’em Up. Dual Stack Implementation involves running both IPv4 and IPv6 in parallel, with one protocol taking preference over the other. Communication is done using the preferred protocol first, failing which it is retried using the secondary protocol.

Fig.5: Example of dual stack traffic[8]

Since communications happen natively in either IPv4 or IPv6, and both protocols co-exist on the network, this is the preferred method of transition until sufficient machines become IPv6 compliant, at which point IPv4 can be pensioned off.
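The try-preferred-then-fall-back behaviour of a dual-stack client can be sketched as follows. The function names are illustrative, and modern stacks refine this into “Happy Eyeballs” (RFC 6555), racing the two address families rather than trying them strictly in sequence.

```python
import socket

# Dual-stack connection sketch: resolve the host, try addresses of the
# preferred protocol (IPv6) first, and fall back to the secondary
# protocol (IPv4) if connecting fails.

def prefer_ipv6(infos):
    """Order getaddrinfo() results so IPv6 addresses are tried first."""
    return sorted(infos, key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)

def connect_dual_stack(host, port, timeout=3):
    infos = prefer_ipv6(socket.getaddrinfo(host, port, type=socket.SOCK_STREAM))
    last_err = None
    for family, socktype, proto, _, addr in infos:
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock          # first successful connection wins
        except OSError as err:
            sock.close()
            last_err = err       # remember the failure, try the next family
    raise last_err or OSError("no usable address for %r" % host)
```

The strict ordering here is what makes the secondary protocol invisible to the user when the preferred one works, which is exactly why security devices must inspect both stacks.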

To NAT or Not. Network Address Translation (NAT) is a technique that allows multiple devices within an internal network to get online by sharing a single public IP address. This public IP address would be provided to a router at the gateway level, which in turn directs traffic to machines inside the network that use non-routable IP addresses.

On a small scale, NAT is used within a Small Office Home Office (SOHO) environment, and on a large scale, often referred to as Carrier Grade NAT (CGN), it is used by ISPs who have a limited number of IPv4 addresses.

Fig.6: Simple implementation of NAT within a SOHO environment

Apart from cutting down on the number of routable IPv4 addresses used, this technology also provides a certain degree of privacy and security to users on the internal network: automated port scans and information-gathering attempts are deterred at the gateway, and would only succeed from inside the private network.

The gargantuan number of addresses available in IPv6 means that ISPs could technically do away with NAT and assign a static IP address to each of their users, yet never run out of addresses in the foreseeable future.

While this would promote end-to-end connectivity, which is how the Internet was originally envisaged, it could also expose machines which were never previously directly connected to the Internet, for now they would be vulnerable to prying eyes and groping hands.

The silver lining, however, is that since an IPv6 address can now be mapped to each user, tracking down malicious traffic and the victims of a malware incident also becomes easier. It could be a boon or a bane, depending on how one perceives it.

The Whois Who of Malware URLs, Phishing & Spam

Over the years as communication media within the Internet expanded from e-mails to other forms such as instant messaging, forums, blogging, social networking, etc., spammers followed suit with campaigns targeting these channels. These campaigns include the relatively innocuous comment spam posted in blogs/forums, Pump ’n Dump scams, attempts to sell Viagra and the like, phishers vying for sensitive user information, and malware related spam which go for the jugular.

The current volume of spam received via various communication channels is kept to a minimum thanks to a combination of techniques which involves, but is not limited to, content-based and list-based filtering. Given the plethora of malware URLs and spam messages disseminated every day, most of this filtering is done using automated systems.

Fig.7 below shows a steady rise in the number of malware/phishing URLs for the first half of the year 2012

Fig.7: Number of malicious URLs crawled by K7 from January 2012 to June 2012 [10]

Content Based Filtering. This works by analyzing different characteristics of a message or a URL. For example, messages with keywords such as Viagra, Rolex, etc., somewhere in the MIME envelope could automatically be declared as spam. Similarly, a URL with words like PayPal or Facebook in the sub-domain component, combined with a recently registered domain name having a minimum validity, can be deemed suspicious. However, when these keywords are represented in another language, automated content-based filtering becomes more challenging, since we would now have to recognise the representation of a keyword in as many different character sets, or their Punycode equivalents, as possible.
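The Punycode side of the problem can be sketched as below: a brand-keyword check on the raw ASCII-Compatible Encoding (“xn--…”) label sees nothing, but decoding it back to Unicode and folding a few confusable characters recovers the keyword. The confusable table here is a tiny illustrative sample of the Unicode data a real filter would need, and the lookalike label in the test is invented.

```python
# Sketch of a keyword filter that copes with IDN labels: decode the
# Punycode (ACE) form back to Unicode, fold known confusable characters
# to their Latin lookalikes, then match brand keywords.

# Cyrillic lookalikes for Latin a, e, o, p, y, c (a tiny sample).
CONFUSABLES = dict(zip("\u0430\u0435\u043e\u0440\u0443\u0441", "aeopyc"))
KEYWORDS = ("paypal", "facebook")

def normalise_label(label):
    if label.startswith("xn--"):
        label = label.encode("ascii").decode("idna")  # ACE -> Unicode
    return "".join(CONFUSABLES.get(ch, ch) for ch in label.casefold())

def looks_like_brand(label):
    """True if the label resembles a protected keyword after folding."""
    return any(k in normalise_label(label) for k in KEYWORDS)
```

Scaling this from six confusables to the full Unicode repertoire, across every protected brand, is precisely the challenge IDNs pose to automated content filters.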

List Based Filtering. This aims to assign a reputation to the source of the e-mail message or the URL. For example, when a stream of messages detected as spam originates from a single IP address, that address may then be assigned a bad reputation, and would go into a blacklist. Similarly, a malicious domain or IP could go into this list.

Subsequent messages from a blacklisted IP address would automatically be labelled as spam and dropped when e-mail servers query the blacklist in real time. Likewise, URLs containing blacklisted domains or IP addresses would also be blocked as malicious.

Fig.8: One blacklisted IP address used to both send spam and host malware [10]
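The shared lookup that Fig.8 implies, one reputation list consulted for both mail senders and URL hosts, can be sketched as below; the blacklist entries are illustrative documentation addresses, and the function names are our own.

```python
from urllib.parse import urlparse

# Minimal in-memory reputation list: both mail sender IPs and URL hosts
# are checked against the same blacklist, mirroring how one bad address
# can be used to both send spam and host malware.

BLACKLIST = {"203.0.113.7", "malicious.example"}  # illustrative entries

def is_blocked_sender(ip):
    """Would a message from this sending IP be dropped?"""
    return ip in BLACKLIST

def is_blocked_url(url):
    """Would this URL be blocked as malicious?"""
    host = urlparse(url).hostname or ""
    return host in BLACKLIST
```

Real deployments query such lists over DNS (DNSBLs) rather than holding them in memory, but the lookup semantics are the same.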

Once a domain or IP address gets blacklisted, the attacker shifts to a new address from which to send the spam or host the malware, until that gets blacklisted too. They do this either by releasing and renewing their IP address from their service provider, if the machine used to send the spam or host the malware is physically owned and controlled by them, or by selecting a new bot from their botnet of many infected machines, which then sends the spam or hosts the malware vicariously on the attacker’s behalf.

On an IPv4 network the attacker has a theoretical maximum of only 4 billion addresses to cycle through. This number increases manifold within an IPv6 network. The increase in the number of domain names, due to the introduction of IDNs, is also likely to add to the blacklist woes, especially when these domains originate from an IPv6 network.

Fig.9 below shows the steady rise in the number of IDNs in the first half of the year 2012. Though currently small, the numbers are expected to increase significantly over time.

Fig.9: Number of malicious IDNs crawled by K7 from January 2012 to June 2012 [10]

Another problem with respect to blacklists is the amount of disk space occupied by these lists and the time taken to look them up. Even in the case of the relatively impoverished IPv4, assuming that all 4 billion addresses get blacklisted, a flat CSV file containing all these addresses occupies a minimum of approximately 60 Gigabytes of disk space on a Unix platform [11]. Consider further the amount of time taken in creating, maintaining, and querying such a big database in real time. Such a system would be nigh on unworkable for IPv6.
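The ~60 Gigabyte figure can be sanity-checked with simple arithmetic over the average length of a dotted-quad line; the calculation below is our own back-of-the-envelope estimate, not taken from [11].

```python
# Back-of-the-envelope check of the blacklist size figure above: all 2**32
# IPv4 addresses stored one dotted-quad per line in a flat file.

def avg_octet_len():
    # Octets 0-9 take 1 character, 10-99 take 2, 100-255 take 3.
    return (10 * 1 + 90 * 2 + 156 * 3) / 256

def blacklist_size_gb():
    bytes_per_line = 4 * avg_octet_len() + 3 + 1  # four octets, three dots, newline
    return 2**32 * bytes_per_line / 10**9

print(round(blacklist_size_gb()))  # -> 61, consistent with the ~60 GB figure
```

An IPv6 blacklist multiplies both the address count and the per-line length, which is why flat-file blacklisting simply does not scale to it.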

Click here to read the third part of this blog.

References:
[7] http://www.us-cert.gov/reading_room/IPv6Malware-Tunneling.pdf
[8] http://www.cybertelecom.org/dns/ipv6_transition.htm
[9] https://blog.damballa.com/archives/438
[10] Internal data
[11] http://www.circleid.com/posts/digging_through_the_problem_of_ipv6_and_email_part_1

Lokesh Kumar
K7 Threat Control Lab
