Saturday, August 1, 2009

eScan Internet Security Suite description

Useful against the threat of viruses, provides effective anti-spyware, and blocks spam

eScan Internet Security Suite is a security suite that provides protection against viruses, hackers, objectionable content and privacy threats.

eScan scans local and network drives for viruses and cleans them in real time. Built on MWL technology, it has a powerful anti-virus and intelligent heuristic scan engine that detects and blocks over 90,000 known and unknown viruses. eScan is the latest offering from MicroWorld Technologies Inc.

It offers a combination of features to help you fight the threat of viruses and set security policies for parental control over the content accessed by your child. It guards against Internet misuse, blocks spam and offensive mail, and blocks pop-up ads.

A very important feature is Browser CleanUp, which protects your privacy by removing traces of the websites you visit: cookies, ActiveX controls, plugins and other links that reveal your browsing habits.

Here are some key features of "eScan Internet Security Suite":

Powerful AntiVirus:
· The AntiVirus monitor of eScan constantly keeps vigil on every file inside the computer as well as files trying to enter the system via email, removable disks, web downloads and software vulnerabilities. Its large malware database contains all viruses that have appeared so far and their variants, and is updated continuously against new and emerging threats.

AntiSpam and Content Scanning:
· The Spam Control module of eScan uses a combination of technologies like email Header Tests, X-Spam Rules Check, Sender Policy Framework (SPF), RBL check, SURBL Check and Non Intrusive Learning Patterns (NILP). You can specify certain words or phrases so that mails with such words either in the subject, header or body will be identified and subsequently quarantined or deleted.

Web Scan and Parental Control:
· eScan gives you highly advanced features for blocking non-business, offensive and pornographic content accessed by employees in organizations, based on words and phrases appearing in such websites. Sophisticated algorithms are employed to avoid false positives in this process. The same feature provides advanced Parental Control for safeguarding kids from sleazy content in the home-user versions of eScan.

eScan Management Console:
· eScan Management Console functions as a centralized server that allows you to remote install the software, distribute updates and upgrades to all machines in your network and enforce Integrated Security Policies for the entire organization. In a large corporate network environment, you can significantly reduce the costs and the Internet traffic by setting up a centralized updating structure. With this feature the security of the entire network can be controlled and managed at one single point.

Remote Web Administration:
· With eScanRAD you can access the Management Console from another computer through a browser and perform management tasks. It also helps in remote technical support for the software. Operations like installation, uninstallation and updating can be conveniently managed through this feature, irrespective of where the administrator is located.

PopUp Filter:
· Pop-ups are quite annoying while browsing the Internet. Many sites open a number of windows as soon as you visit them, which disturbs your activities on the computer and consumes bandwidth. eScan offers a complete solution to this problem by providing a comprehensive pop-up blocker.

New "MWAV (MicroWorld AntiVirus) Utility":
· MWAV is a powerful anti-virus utility. With some new intelligent features added, it's all set to give you the best ever performance. This module can detect and disinfect registry entries left by viruses, spyware, adware and other malware, so that system performance and stability can be improved drastically.

TCP Connections:
· TCP Connections is a network monitoring tool that examines TCP/IP activity on Windows computers. It lists all TCP and UDP endpoints on a PC, including the remote address (along with the Domain Name of the remote address) and state of TCP connections.

Web Washer for Privacy Protection:
· The eScan Browser Cleanup feature provides an easy and automatic means to protect your privacy by erasing details of the sites and web pages you have accessed. It also lets you remove tracks of your normal offline activity, such as opening and closing files, delete operations, etc. Deleted files, Internet cache, history files, cookies, etc. are removed permanently from your hard disks.

Multilanguage Support:
· eScan is available in multiple languages. You can choose the language while installing the software. The languages available are English, German, Finnish, French, Italian, Portuguese, Spanish, Polish, Chinese and Latin Spanish.

Additional Features:
· eScan has many other features that offer optimal control over virus scanning and content security related activity. Some of them are: remote access file rights, where you can allow or bar specific files from being created or modified by remote users in your network; automatic download of anti-virus updates; and auto-mailing of notifications to users.

Requirements:

· Pentium II processor or higher
· 64-128 MB of RAM
· 50 MB of free hard disk space

Limitations:

· 30 days trial
· Nag screen


ftp://ftp.microworldsystems.com/download/escan/es_iwne.exe





Thursday, July 30, 2009

Internet security

When a computer connects to a network and begins communicating with others, it is taking a risk. Internet security involves the protection of a computer's Internet account and files from intrusion by an unknown user.[1] Basic security measures include well-chosen passwords, appropriate file permissions and regular backup of the computer's data.

Security concerns are in some ways peripheral to normal business working, but serve to highlight just how important it is that business users feel confident when using IT systems. Security will probably always be high on the IT agenda simply because cyber criminals know that a successful attack is very profitable. This means they will always strive to find new ways to circumvent IT security, and users will consequently need to be continually vigilant. Whenever decisions need to be made about how to enhance a system, security will need to be held uppermost among its requirements.

Internet security professionals should be fluent in the following major aspects:


Anti-virus

Some apparently useful programs also contain features with hidden malicious intent. Such programs are known as Malware, Viruses, Trojans, Worms, Spyware and Bots.

  • Malware is the most general name for any malicious software designed for example to infiltrate, spy on or damage a computer or other programmable device or system of sufficient complexity, such as a home or office computer system, network, mobile phone, PDA, automated device or robot.
  • Viruses are programs which are able to replicate their structure or effect by integrating themselves or references to themselves, etc into existing files or structures on a penetrated computer. They usually also have a malicious or humorous payload designed to threaten or modify the actions or data of the host device or system without consent. For example by deleting, corrupting or otherwise hiding information from its owner.
  • Trojans (Trojan Horses) are programs which may pretend to do one thing, but in reality steal information, alter it or cause other problems on a computer or other programmable device/system. Trojans can be hard to detect.
  • Spyware includes programs that surreptitiously monitor keystrokes, or other activity on a computer system and report that information to others without consent.
  • Worms are programs which are able to replicate themselves over a (possibly extensive) computer network, and also perform malicious acts that may ultimately affect a whole society / economy.
  • Bots are programs which take over and use the resources of a computer system over a network without consent, and communicate the results to others who may control the Bots.

The above concepts overlap and they can obviously be combined. The terminology is evolving.

Antivirus programs and Internet security programs are useful in protecting a computer or programmable device / system from malware.

Such programs are used to detect and usually eliminate viruses. Anti-virus software can be purchased or downloaded via the internet. Care should be taken in selecting anti-virus software, as some programs are not as effective as others in finding and eliminating viruses or malware. Also, when downloading anti-virus software from the Internet, one should be cautious as some websites say they are providing protection from viruses with their software, but are really trying to install malware on your computer by disguising it as something else.

Anti-spyware

There are two major kinds of threats in relation to spyware:

Spyware collects and relays data from the compromised computer to a third-party.

Adware automatically plays, displays, or downloads advertisements. Some types of adware are also spyware and can be classified as privacy-invasive software. Adware is often integrated with other software.


Email Security

E-mail is a significant part of the Internet, and e-mail encryption is an important subset of this topic.

Browser choice

Almost 70% of the browser market is occupied by Internet Explorer[1]. As a result, malware writers often target Internet Explorer, frequently exploiting ActiveX vulnerabilities. Internet Explorer's market share is continuously dropping (as of 2009; see list of web browsers for statistics) as users switch to other browsers, most notably Firefox, Opera and Google Chrome.

Buffer overflow attacks

A buffer overflow occurs when a program writes more data to a memory buffer than it can hold, overwriting adjacent memory; an attacker can exploit this to gain full system access. Most Internet security solutions today lack sufficient protection against these types of attacks.



by Wikipedia


Wednesday, July 29, 2009

Bing: Safety, Security, and Search

Last month Microsoft unveiled Bing, a new decision engine. What’s a decision engine? It’s a search engine with more. To find out how Bing helps you find what you need on the Web, you can try Bing or take a tour.

Because we focus on security and safety in this blog, we thought we’d dedicate the next few posts to tips on how to protect you and your family’s privacy when you use Bing.

Stay tuned for more.

Get the latest security updates for Internet Explorer and Visual Studio

Today Microsoft released 2 out-of-band security updates:

  • MS09-034 - addresses a vulnerability in Microsoft Internet Explorer (KB 972260)

  • MS09-035 - addresses a vulnerability in Microsoft Visual Studio (KB 969706)

Everyone should get the update for Internet Explorer. Developers should also get the update for Visual Studio.

For more information, see Microsoft Security Advisory 973882 and Microsoft Security Bulletins MS09-034 and MS09-035 Released, a post on the Microsoft Security Response Center (MSRC) blog.

If you have automatic updating turned on, you may already have the updates. To learn how to turn on automatic updating for your operating system, see Update your PC automatically.

If you do not have automatic updating turned on, or to check whether you need the updates, go to Microsoft Update.

What is an out-of-band security update?

An out-of-band security update is an update that is released outside of the normal Microsoft security update schedule. Microsoft normally releases updates on the second Tuesday of the month.

Passive Network Analysis P4

Potential Uses of Passive Analysis

The power of passive analysis is that you really don’t have to do anything to collect data – it is being generated all the time during normal network operations. After all, if users and applications did not need data on servers, the network would not exist in the first place. Our networks are constantly sending data to and from hosts around the enterprise, and every one of those transactions is a potential data source for us to capture and analyze. The challenge for us as analysts is to figure out what information we want from that data:

  • Situational Awareness

    Passive analysis techniques can tell us a lot about our network and how it normally operates. Without a solid understanding of the enterprise, it is very difficult to develop effective security policy and countermeasures. For instance, if you don’t know what your address space is, what routes to the Internet are available, or the operating systems that comprise the network, how can you possibly assess whether a particular vulnerability affects your network security posture? Or whether you have deployed firewalls and IDS sensors to the right locations in the network?
  • Policy Enforcement

    Passive analysis can help identify illicit services and other user misbehavior on the network almost instantly. A simple network capture with Ethereal or any other sniffer will identify the presence of streaming media, peer-to-peer file sharing, gaming activity, and other unauthorized use of the network. The easiest way to do this using Ethereal is to filter on packets with a source IP internal to your network, then sort on the TCP or UDP port numbers (a command-line sketch of this kind of filtering appears after this list). In most cases, you will see a common collection of services that are easily identifiable as benign. These tend to be TCP ports used by the operating systems in use on your network and commonly used services (DNS, FTP, HTTP, etc.). The key is to validate the sources you identify as authorized to serve that material, and to use an active measure to validate the results of your passive analysis. After all, there is no reason a Trojan cannot be bound to TCP 80 instead of some arbitrary ephemeral port.
  • Detecting Insider Threats

    Moving beyond policy enforcement, passive analysis has the potential to help identify compromises that were not detected at the perimeter. A good example might be the Wualess back door reported by Symantec [11, Backdoor.Wualess.C]. This threat opens a back door and attempts to contact an IRC server on TCP port 5202 on the domain dnz.3322.org using the channel “#Phantom”. There are three discrete criteria we can easily key on in order to detect this particular threat: the presence of TCP 5202, the IRC protocol in general, and outbound connections to this unusual domain (the sketch after this list includes filters for these indicators). If we have good situational awareness (knowing what kinds of traffic are permitted in our enterprise) this would be an easy threat to identify.
  • Incident Response

    Passive analysis is an invaluable tool during incident response operations. Attackers don’t compromise systems just to own them; they use them. In most cases automated malicious code follows this same principle. Monitoring the network passively during incident response operations allows real-time visibility into the scope of a compromise, provides clues as to other systems that may be affected, and can provide clues as to where the attack originated.
  • Indications and Warnings

    Many of us maintain “gray lists” of domains that tend to originate attacks. Some gray lists include domains or specific sites whose access violates corporate policy (Internet gaming, pornography, online auctions, etc.). Passively monitoring outbound connections using tools like dsniff can provide a potential indicator that an attack or misuse of the network has already occurred. Similarly, a p0f log entry that indicated a specific IP address changed operating system would be cause for concern – perhaps a user has dual-booted the host to conceal their activities or to facilitate some kind of attacker behavior. Windows to Linux shifts are particularly concerning for this reason.
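
As a concrete illustration of the policy-enforcement and insider-threat checks above, the same filtering can be done from the command line with tcpdump. The interface name, the 192.168.0.0/16 internal range and the capture file name are assumptions; the TCP 5202 and dnz.3322.org indicators come from the Wualess example. This is a minimal sketch, not a complete monitoring setup:

# Record traffic sourced from inside the network (assuming 192.168.0.0/16 is internal)
tcpdump -n -i eth0 -w internal.pcap 'src net 192.168.0.0/16'

# Tally the source address.port pairs seen, most common first, to spot unexpected services
tcpdump -n -r internal.pcap 'tcp or udp' | awk '{print $3}' | sort | uniq -c | sort -rn | head

# Key on the Wualess indicators: outbound TCP 5202...
tcpdump -n -r internal.pcap 'tcp port 5202'

# ...and DNS lookups for the unusual domain it contacts
tcpdump -n -r internal.pcap 'udp port 53' | grep dnz.3322.org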

These techniques remain cumbersome today, mostly because there are so few integrated tool suites that present the full range of passive analysis capabilities. Nonetheless, they have tremendous potential and are easily implemented in most small to mid-sized networks using open-source software. By knowing what our networks look like and what they are used for, we can develop that “Home Field Advantage” and steal a march on those attacking our systems.

About the Author

Stephen Barish is a Senior Director at MacAulay Brown, Inc., and has been a security researcher and practitioner since 1992. He holds a B.Sc. in Electrical Engineering and is a CISSP.

Passive Network Analysis P3

Thanks to the magic of TCP/IP fingerprinting, which works pretty much the same in passive mode as it does in active mode, we can also make some educated guesses about the operating system of the systems involved in the traffic capture. The technique works because different operating systems implement the TCP/IP stack slightly differently. Spitzner's "Know Your Enemy: Passive Fingerprinting" paper [10] (4 March 2002) discussed four parameters that seemed to vary consistently between operating systems: TTL, Window Size, DF, and TOS. Zalewski's p0f 2.0 expands on these, providing much more granular tests to identify operating systems passively (Figure 3).


Figure 3 – Sample p0f Signatures

Running p0f against the traffic we captured earlier identifies the Web server as a FreeBSD 6.x system, which is consistent with the operating system of the Web server.
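
If you want to reproduce this, p0f can watch a live interface or read a saved capture. The interface and file names below are placeholders, and option letters vary slightly between p0f releases, so check p0f -h on your system; this is only a sketch:

# Fingerprint hosts passively from a live interface
p0f -i eth0

# Or analyze a previously saved tcpdump/Ethereal capture
p0f -s capture.pcap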

This example demonstrates the basic principles in passive network analysis. We can use similar tools and techniques to characterize traffic statistics (the percentage of TCP, UDP, ARP, etc.), connection tracking, bandwidth used, the number and size of packets transmitted, etc.

Passive Network Analysis P2

This has some advantages over active scanning solutions. Passive techniques introduce zero traffic on the monitored network, which can be important in high-availability situations, or where the network is not resilient enough to handle large volumes of scanning traffic. Passive techniques do not generate IDS alerts or log entries in the hosts and servers on the monitored network, reducing the overall analytical burden. In some circumstances, passive techniques can actually identify the presence of firewalls, routers, and switches performing NAT, and potentially characterize the hosts behind them.

In spite of all the advantages associated with the technique, there are limitations as well. Passive analysis always requires the ability to insert a sensor somewhere in the monitored network, either in hardware or software. Sensors also have to be placed in a topological location that allows them to see the traffic of interest, a non-trivial task in modern switched enterprises. Finally, the toolkit for passive analysis is much less mature than traditional active techniques, forcing a higher level of effort on the analyst during sensor deployment, data fusion, and analysis.

To better explain, let's examine a sample capture using Ethereal [6]. You can accomplish the same thing using tcpdump, provided you know how to write good BPF (Berkeley Packet Filter) expressions. Ethereal provides a lot of tools to save time for the lazy man, including a richer filtering language and the ability to rapidly sort collected packets on pretty much any data field. Of course, before we start collecting traffic and writing filter code, it's useful to have an idea of what we're looking for – specifically, services running inside our network. Fortunately, there are a couple of rules of thumb that make this a bit easier. First, most services are associated with an assigned port number. These "standard" ports are assigned by the IANA (Internet Assigned Numbers Authority) [7] and are incorporated in the RFCs for standard TCP/IP services and protocols. Less common or malicious source ports can be researched using the Internet Storm Center [8] provided by the SANS Institute [9]. Both also provide excellent reading material on network analysis in general, as well as a snapshot of what other network defenders are seeing on their networks.

Another rule of thumb in analyzing traffic helps us differentiate clients from servers. Generally speaking, servers issue packets on the port number associated with the service. So to find a Web server, you look for TCP Source Port 80 emanating from inside your network. To demonstrate this, I captured some traffic off my home network using Ethereal and sorted it based on TCP Source Port. Sure enough, TCP 80 was there, and when I validated it using simple banner grabbing, the service on 192.168.254.2 was in fact a Web server. Note that our sensor did not have to initiate traffic in order to make this deduction – the data was sniffed off the wire; I could have accomplished the same thing with nmap, but that would have required the introduction of traffic on the network. One other note of caution – not all TCP protocols obey this rule of thumb, so study your copy of TCP/IP Illustrated!
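
The same deduction can be made with tcpdump and a short BPF expression when Ethereal is not handy. The interface name and the 192.168.254.0/24 internal range below are assumptions carried over from the example:

# TCP source port 80 leaving an internal address suggests an internal Web server
tcpdump -n -i eth0 'tcp src port 80 and src net 192.168.254.0/24'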


Figure 2 – Capture of Web Sessions Using Ethereal

Passive Network Analysis P1

In sports, it's pretty much accepted wisdom that home teams have the advantage; that's why teams with winning records on the road do so well in the playoffs. But for some reason we rarely think about "the home field advantage" when we look at defending our networks. After all, the best practice in architecting a secure network is a layered, defense-in-depth strategy. We use firewalls, DMZs, VPNs, and configure VLANs on our switches to control the flow of traffic into and through the perimeter, and use network and host-based IDS technology as sensors to alert us to intrusions.

These are all excellent security measures – and why they are considered "best practices" in the industry – but they all fall loosely into the same kind of protection that a castle did in the Middle Ages. While they act as barriers to deter and deny access to known, identifiable bad guys, they do very little to protect against unknown threats, or attackers that are already inside the enterprise, and they do little to help us understand our networks so we can better defend them. This is what playing the home field advantage is all about - knowing our networks better than our adversaries possibly can, and turning their techniques against them.

Paranoid? Or maybe just prudent...

Our objective is to find out as much as possible about our own networks. Ideally we could just stroll down and ask the IT folks for a detailed network topology, an identification of our address ranges and the commonly used ports and protocols on the network. It seems counter-intuitive, but smaller enterprises actually do a better job of tracking this kind of information than gigantic multinational companies, partially because there is less data to track, and also because security and IT tend to work better together in smaller organizations.

In fact, large companies have a real problem in this area, especially if their business model includes growth through mergers and acquisitions. Sometimes the IT staff doesn't even know all the routes to the Internet, making it pretty tough to defend these amalgamated enterprises.

The first, most basic piece of information we need about our networks in order to defend them well is the network map. Traditionally, attackers and defenders use network mapping technologies such as nmap [1], which use a stimulus-response method to confirm the existence of a host and, depending on the options used, to identify its operating system and open ports. This technique relies on non-RFC-compliant responses to "odd" packets, and has been around a long time. (Fyodor provides a great paper [2] on the technique, and pretty much pioneered the field of active operating system identification.) Active network mapping is a very powerful technique, but it does have its limitations. It introduces a significant amount of traffic on the network, for one, and some of that traffic can cause problems for network applications. In some cases, nmap can cause operating system instability, although this has become less common in recent years. Active scans also provide only a snapshot in time of the enterprise topology and composition, and active mapping tools generally have difficulties or limitations dealing with firewalls, NAT, and packet-filtering routers. Fortunately there are passive analysis techniques that generate similar results.
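
For comparison, the active approach described above might look like the sketch below; the target range is a placeholder. Note that this introduces exactly the kind of scan traffic that passive techniques avoid:

# Active mapping: TCP SYN scan with OS detection across a subnet
nmap -sS -O 192.168.1.0/24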

Passive Analysis Theory

Passive network analysis is much more than intrusion detection, although that is the form of it most commonly used. Passive techniques can map connections, identify ports and services in use in the network, and can even identify operating systems. Lance Spitzner of the Honeynet project [3] and Michael Zalewski [4] helped pioneer passive fingerprinting techniques that reliably identify operating systems from TCP/IP traces. Zalewski's p0f v 2.0.8 [5] is one of the best passive OS fingerprinting tools available, and is the one used in this article to demonstrate some of the capabilities of the technique.

The key to passive network analysis is understanding that it works almost the same as active mapping and OS fingerprinting. All passive techniques rely on a stimulus-response scenario; they just rely on someone else's stimulus and then collect the response (Figure 1).


Figure 1 – Active and Passive Network Analysis

In the active scenario, the target (A) responds to stimulus provided by our mapping engine, which is useful, but an artificial observation condition we created just to conduct the mapping exercise. In the passive scenario, the target (A) responds to stimuli resulting from normal use. In both cases we can see the ports and services involved, connection flow, timing information, and can make some educated guesses about our network's operating characteristics from the resulting data. But the passive technique allows something the active one does not: we can see the network from the perspective of the user and application behavior during normal operations.

Web attacks hit U.S., South Korean sites P2

Signs of the latest attack started appearing over the weekend, when five U.S. government sites were targeted. By Monday, reports indicated that CIOs of federal agencies were scrambling to head off the attacks.

Yet, the U.S. government has been typically closed-mouthed about the threat. And that is perhaps the biggest lesson to be learned from the attack, said Amit Yoran, CEO of security firm NetWitness and a former cyber official in the U.S. Department of Homeland Security.

"This is a good sampling of a large scale attack that has a lot of people's attention and a lot of people concerned," he said. "It has been going for several days now, and there has been a coordinated restriction of information from the government. And that causes all sorts of issues — people are misinformed and they are jumping to the wrong conclusions."

Sharing information on ongoing attacks has been a major problem in the relationship between private industry, which owns nearly 90 percent of the Internet's infrastructure, and government agencies. Law enforcement agencies typically request incident reports from companies, but in return, give little information about attacks or distribute general warnings months after an incident has occurred.

Streamlining information sharing is not on the Obama administration's list of near-term objectives included in the recently released Cyberspace Policy Review, but it did make the medium-term to-do list. The latest attack shows that the government needs to give a greater priority to disseminating information, Yoran said.

"If the response to this is, 'Shut up and don't say anything,' you can see what the reaction would be to a more silent issue that did not get the media attention this attack has gotten," he said.

If you have tips or insights on this topic, please contact SecurityFocus.

Web attacks hit U.S., South Korean sites P1

Robert Lemos, SecurityFocus 2009-07-08

A widespread distributed denial-of-service attack continued to inundate U.S. government and South Korean Web sites with network traffic on Wednesday, the fourth day of a quickly escalating attack whose targets suggest a connection to the tensions surrounding North Korea.

The attack appears to have begun on Saturday night, July 4, Pacific time, initially attacking five U.S. government Web sites, according to configuration files of the malicious software used for the attack and obtained by security firm SecureWorks. By Monday evening, the attack had expanded to 26 Web sites, including sites in South Korea and some U.S. commercial sites, said Joe Stewart, director of malicious threat research at SecureWorks.

Each time, computers compromised with the bot software, which appeared to share code with the infamous MyDoom family of viruses, were updated with a configuration file that listed the latest targets, Stewart said.

In the latest file, distributed on Tuesday, "some of the U.S. sites were taken out and the South Korean sites were added in," he said. The update in the configuration file matched the timing of reported attacks on South Korean sites.

A South Korean blogger publicized his own list of 36 sites that he culled from the code, including banks, newspapers and government Web sites in both South Korea and the United States. Among the U.S. government Web sites were the Department of Homeland Security, the Federal Trade Commission, and the Treasury Department.

While media reports have focused on the targets of the attacks, Jose Nazario, manager of security research for Arbor Networks, stressed that the actual sophistication and power of the denial-of-service attacks were mediocre at best. Data collected in one case indicated an attack of 23 Mbps to 25 Mbps — not large by modern standards — while the bot software showed a lack of understanding of current packing techniques and significant reuse of code from other malware, especially from the MyDoom code base that can be found in certain forums online.

"The writer is not exactly the most talented programmer out there," Nazario said.

Another security professional agreed that the attacker appeared to be an amateur.

"This, in my opinion, is not a very sophisticated attack, and to me, that is disappointing, because these sites should not be collapsing from these attacks," said Michael Sutton, vice president of security research for Zscaler.

The attacks share characteristics of past packet storms that took down high-profile targets. In 2000, a massive denial-of-service attack took down major e-commerce sites, including Amazon.com, CNN.com and Yahoo. Two months after the attacks, a Canadian teenager known as Mafiaboy was arrested and, the following year, received an eight-month sentence for the attacks. In 2006, gray-hat security firm Blue Security shuttered its business following an extended denial-of-service attack that took down the company's site, a blog service and its domain-name provider. No arrests resulted from an investigation into the attack, which appeared to have been launched by spammers.

Linux Firewall-related /proc Entries

Most people, when creating a Linux firewall, concentrate solely on manipulating kernel network filters: the rulesets you create using userspace tools such as iptables (2.4 kernels), ipchains (2.2 kernels), or even ipfwadm (2.0 kernels).

However, there are kernel variables -- independent of any kernel filtering rules -- that affect how the kernel handles network packets. This article will discuss these variables and the effect they have on the network security of your Linux host or firewall.

What is Linux's /proc directory?

There are many settings inside the Linux kernel that can vary from machine to machine. Traditionally, these were set at compile time, or sometimes were modifiable through oft-esoteric system calls. For example each machine has a host name which would be set at boot time using the sethostname(2) system call, while iptables reads and modifies your Netfilter rules using getsockopt(2) and setsockopt(2), respectively.

Modern Linux kernels have many settings that can be changed. Providing or overloading a plethora of system calls becomes unwieldy, and forcing administrators to write C code to change them at run time is a pain. Instead, the /proc filesystem was created.[1] /proc is a virtual filesystem -- it does not reside on any physical or remotely mounted disk -- that provides a view of the system configuration and runtime state.

The /proc filesystem can be navigated just like any filesystem. Entries all appear to be standard files, directories, and symlinks, but are actually views into the kernel information itself. Some of these can be modified by root, but most are read only. To view these files, cat and more are your friends:

 # cd /proc
# ls -l version
-r--r--r-- 1 root root 0 Jun 20 18:30 /proc/version
# cat version
Linux version 2.4.21 (guru@example.com) (gcc version 2.95.4 20011002) ...

Note that the kernel fudges the ls output a bit - these files will appear to have content when viewed, but will always have a length of 0 bytes. Rather than waste time figuring out how much output would be produced if the file were viewed, the kernel just reports 0 for most statistics, and gives the current time for all timestamps.

/proc/sys

All the /proc entries that can be modified live inside the /proc/sys directory. You can modify these in two different ways: using standard Unix commands, or via sysctl. The following examples show how you can set the hostname using both methods:

Changing /proc pseudo-files manually

 # ls -l /proc/sys/kernel/hostname
-r--r--r-- 1 root root 0 Jun 20 18:30 /proc/sys/kernel/hostname

# hostname
catinthehat

# cat /proc/sys/kernel/hostname
catinthehat

# echo 'redfishbluefish' > /proc/sys/kernel/hostname

# hostname
redfishbluefish

Changing /proc pseudo-files via sysctl

 # hostname
redfishbluefish

# sysctl kernel.hostname
kernel.hostname = redfishbluefish

# sysctl -w kernel.hostname=hop-on-pop
kernel.hostname = hop-on-pop

# hostname
hop-on-pop

Note that the main difference between these two methods is that sysctl uses dots[2] as a separator instead of slashes, and the leading /proc/sys prefix is assumed. sysctl can be run with a file as an argument, in which case all variable modifications in that file are performed:

 # hostname
hop-on-pop

 # cat reset_hostname
; Set our hostname
kernel.hostname=butterbattlebook
;
; Turn on syncookies
net.ipv4.tcp_syncookies = 1

# sysctl -p reset_hostname
kernel.hostname = butterbattlebook
net.ipv4.tcp_syncookies = 1

# hostname
butterbattlebook

If -p is used and no filename is provided, the file /etc/sysctl.conf will be read.

The changes you make to /proc variables affect only the currently running kernel - they will revert back to the compile-time defaults at the next reboot. If you wish your changes to be permanent, you can either create a startup script that sets variables to your liking, or you can create a /etc/sysctl.conf file. Most Linux distributions will run sysctl -p at some point during the normal bootup process.
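
As an illustration, an /etc/sysctl.conf that applies several of the settings recommended later in this article might look like the sketch below. Treat it as a starting point rather than a drop-in configuration; the right values depend on whether the machine is a simple host, a router or a firewall:

; Sample hardening entries -- adjust for the machine's role
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.ip_forward = 0
net.ipv4.tcp_syncookies = 1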

Firewall-related /proc entries

While there are many different kernel variables you can tweak, this article will only discuss those specifically related to protecting your Linux machine from network attacks. Also, we'll restrict ourselves to the IPv4 versions, rather than IPv6, since the latter inherits variable settings from the former where appropriate anyway.

If you're interested in learning about other kernel variables, read the proc(5) man page. There are also several files in the kernel source inside the Documentation directory that may provide more information, /usr/src/linux/Documentation/filesystems/proc.txt and /usr/src/linux/Documentation/networking/ip-sysctl.txt are good starting points.

Some kernel variables are integers, such as kernel.random.entropy_avail, which contains the bytes of entropy available to the random number generator. Others are arbitrary strings, such as fs.inode-state, which contains the number of allocated and free kernel inodes separated by spaces. However, most of the firewall-related variables are simple binary values, where '1' means on and '0' means off.

A Linux machine can have more than one interface, and you can set some variables on different interfaces independently. These are in the /proc/sys/net/ipv4/conf directory, which contains all the current interfaces available, such as lo, eth0, eth1, or wav0, and two other directories, all and default.

When you change variables in the /proc/sys/net/ipv4/conf/all directory, the variable for all interfaces and default will be changed as well. When you change variables in /proc/sys/net/ipv4/conf/default, all future interfaces will have the value you specify. This should only affect machines that can add interfaces at run time, such as laptops with PCMCIA cards, or machines that create new interfaces via VPNs or PPP, for example.

Proc files

Below are /proc settings that you can tweak to secure your network configuration. I've prepended each filename with either enable (1) or disable (0) to show you my suggested settings where applicable. You can actually use the following handy shell functions to set these in a startup script if you prefer:

 enable ()  { for file in "$@"; do echo 1 > "$file"; done; }
disable () { for file in "$@"; do echo 0 > "$file"; done; }
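
For example, a startup script that defines these functions could then apply a handful of the settings discussed below. The particular selection here is only illustrative, and shell globs such as conf/*/rp_filter expand to one entry per interface:

enable  /proc/sys/net/ipv4/tcp_syncookies \
        /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts \
        /proc/sys/net/ipv4/conf/*/rp_filter
disable /proc/sys/net/ipv4/conf/*/accept_source_route \
        /proc/sys/net/ipv4/conf/*/accept_redirects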

enable /proc/sys/net/ipv4/icmp_echo_ignore_all
When enabled, ignore all ICMP ECHO REQUEST (ping) packets. Does nothing to actually increase security, but can hide you from ping sweeps, which may prevent you from being port scanned. Nmap, for example, will not scan unpingable hosts unless -P0 is specified. This will prevent normal network connectivity tests, however.

enable /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
When enabled, ignore broadcast and multicast pings. It's a good idea to ignore these to prevent you from becoming an inadvertent participant in a distributed denial of service attack, such as Smurf.

disable /proc/sys/net/ipv4/conf/*/accept_source_route
When source routed packets are allowed, an attacker can forge the source IP address of connections by explicitly saying how a packet should be routed across the Internet. This could enable them to abuse trust relationships or get around TCP Wrapper-style access lists. There's no need for source routing on today's Internet.
enable /proc/sys/net/ipv4/conf/*/rp_filter
When enabled, if a packet comes in on one interface, but our response would go out a different interface, drop the packet. Unnecessary on hosts with only one interface, but remember, PPP and VPN connections usually have their own interface, so it's a good idea to enable it anyway. Can be a problem for routers on a network that has dynamically changing routes. However on firewall/routers that are the single connection between networks, this automatically provides spoofing protection without network ACLs.
disable /proc/sys/net/ipv4/conf/*/accept_redirects
When you send a packet destined to a remote machine you usually send it to a default router. If this machine sends an ICMP redirect, it lets you know that there is a different router to which you should address the packet for a better route, and your machine will send the packet there instead. A cracker can use ICMP redirects to trick you into sending your packets through a machine it controls to perform man-in-the-middle attacks. This should certainly never be enabled on a well configured router.
disable /proc/sys/net/ipv4/conf/*/secure_redirects
Honor ICMP redirects only when they come from a router that is currently set up as a default gateway. Should only be enabled if you have multiple routers on your network. If your network is fairly static and stable, it's better to leave this disabled.
disable /proc/sys/net/ipv4/conf/*/send_redirects
If you're a router and there are alternate routes of which you should inform your clients (you have multiple routers on your networks), you'll want to enable this. If you have a stable network where hosts already have the correct routes set up, this should not be necessary, and it's never needed for non-routing hosts.
disable /proc/sys/net/ipv4/ip_forward
If you're a router this needs to be enabled. This applies to VPN interfaces as well. If you do need to forward packets from one interface to another, make sure you have appropriate kernel ACLs set to allow only the traffic you want to forward.
(integer) /proc/sys/net/ipv4/ipfrag_high_thresh
The kernel needs to allocate memory to be able to reassemble fragmented packets. Once this limit is reached, the kernel will start discarding fragmented packets. Setting this too low or high can leave you vulnerable to a denial of service attack. While under an attack of many fragmented packets, a value too low will cause legitimate fragmented packets to be dropped, a value too high can cause excessive memory and CPU use to defragment attack packets.
(integer) /proc/sys/net/ipv4/ipfrag_low_thresh
Similar to ipfrag_high_thresh, this is the minimum amount of memory you want to allow for fragment reassembly.
(integer) /proc/sys/net/ipv4/ipfrag_time
The number of seconds the kernel should keep IP fragments before discarding them. Thirty seconds is usually a good time. Decrease this if attackers are forging fragments and you'll be better able to service legitimate connections.
enable /proc/sys/net/ip_always_defrag
Always defragment fragmented packets before passing them along through the firewall. Linux 2.4 and later kernels do not have this /proc entry; defragmentation is turned on by default.
(integer) /proc/sys/net/ipv4/tcp_max_orphans
The number of local sockets that are no longer attached to a process that will be maintained. These sockets are usually the result of failed network connections, such as the FIN-WAIT state where the remote end has not acknowledged the tear down of a TCP connection. After this limit has been reached, orphaned connections are removed from the kernel immediately. If your firewall is acting as a standard packet filter, this variable should not come into play, but it is helpful on connection endpoints such as Web servers. This variable is set at boot time to a value appropriate to the amount of memory on your system.

Other related variables that may be useful include tcp_retries1 (how many TCP retries we send before giving up), tcp_retries2 (how many TCP retries we send that are associated with an existing TCP connection before giving up), tcp_orphan_retries (how many retries to send for connections we've closed), tcp_fin_timeout (how long we'll maintain sockets in partially closed states before dropping them.) All of these parameters can be tweaked to fit the purpose of the machine, and are not purely security related.

(integer) /proc/sys/net/ipv4/icmp_ratelimit
(integer) /proc/sys/net/ipv4/icmp_ratemask
Together, these two variables allow you to limit how frequently specified ICMP packets are generated. icmp_ratelimit defines how many packets that match the icmp_ratemask per jiffie (a unit of time, a 1/100th of a second on most architectures) are allowed. The ratemask is a logical OR of all the ICMP codes you wish to rate limit. (See /usr/include/linux/icmp.h for the actual values.) The default mask includes destination unreachable, source quench, time exceeded and parameter problem. If you increase the limit, you can slow down or potentially confuse port scans, but you may inhibit legitimate network error indicators.
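
As a worked example, using the ICMP type numbers from /usr/include/linux/icmp.h (destination unreachable = 3, source quench = 4, time exceeded = 11, parameter problem = 12), the mask covering that default set can be computed with shell arithmetic; on many kernels this matches the default value of icmp_ratemask:

 # echo $(( (1<<3) | (1<<4) | (1<<11) | (1<<12) ))
6168
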
enable /proc/sys/net/ipv4/conf/*/log_martians
Have the kernel send syslog messages when packets are received with addresses that are illegal.
(integer) /proc/sys/net/ipv4/neigh/*/locktime
Reject ARP address changes if the existing entry is less than this many jiffies old. If an attacker on your LAN uses ARP poisoning to perform a man-in-the-middle attack, raising this variable can prevent ARP cache thrashing.
(integer) /proc/sys/net/ipv4/neigh/*/gc_stale_time
How often in seconds to clean out old ARP entries and make a new ARP request. Lower values will allow the server to more quickly adjust to a valid IP migration (good) or an ARP poisoning attack (bad).
disable /proc/sys/net/ipv4/conf/*/proxy_arp
Reply to ARP requests if we have a route to the host in question. This may be necessary in some firewall or VPN/router setups, but is generally a bad idea on hosts.
enable /proc/sys/net/ipv4/tcp_syncookies
A very popular denial of service attack involves a cracker sending many (possibly forged) SYN packets to your server, but never completing the TCP three way handshake. This quickly uses up slots in the kernel's half open queue, preventing legitimate connections from succeeding. Since a connection does not need to be completed, there need be no resources used on the attacking machine, so this is easy to perform and maintain.

If the tcp_syncookies variable is set (only available if your kernel was compiled with CONFIG_SYNCOOKIES) then the kernel handles TCP SYN packets normally until the queue is full, at which point the SYN cookie functionality kicks in.

SYN cookies work by not using a SYN queue at all. Instead the kernel will reply to any SYN packet with a SYN|ACK as normal, but it will present a specially-crafted TCP sequence number that encodes the source and destination IP address and port number and the time the packet was sent. An attacker performing the SYN flood would never have gotten this packet at all if they're spoofing, so they wouldn't respond. A legitimate connection attempt would send the third packet of the three way handshake which includes this sequence number, and the server can verify that it must be in response to a valid SYN cookie and allows the connection, even though there is no corresponding entry in the SYN queue.

Enabling SYN cookies is a very simple way to defeat SYN flood attacks while using only a bit more CPU time for the cookie creation and verification. Since the alternative is to reject all incoming connections, enabling SYN cookies is an obvious choice. For more information about the inner workings of SYN cookies, see http://cr.yp.to/syncookies.html
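
If you suspect a SYN flood is in progress, a quick way to gauge the half-open queue and to turn the protection on for the running kernel (assuming CONFIG_SYNCOOKIES was compiled in) is:

 # netstat -tn | grep -c SYN_RECV

 # sysctl -w net.ipv4.tcp_syncookies=1
net.ipv4.tcp_syncookies = 1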

Summary

When creating a Linux firewall, or hardening a Linux host, there are many kernel variables that can be utilized to help secure the default networking stack. Coupled with more advanced rules, such as Netfilter (iptables) kernel ACLs, you can have a very secure machine with a minimum of fuss.


ARP spoofing HTTP infection malware

This year, we've seen many ARP spoofing viruses, also known as ARP cache-poisoning viruses. This type of malware comes in many variants and is widely spread in China. Recently, we uncovered an ARP spoofing virus that exhibits several new features.

The new ARP spoofing virus inserts a malicious URL into the session of an HTTP response, injecting malicious content that then exploits Internet Explorer. At the same time, the virus makes the poisoned host act as an HTTP proxy server. When any machine in the same subnet as the poisoned machine accesses the Internet, the traffic goes through the poisoned machine.

Let's take a detailed look at the features of the latest ARP spoofing virus.

This type of virus replaces the MAC address of the Gateway machine with the MAC address of the poisoned machine. The following screen shows the correct Gateway MAC address:

When we run the ARP spoofing virus, the Gateway MAC address is changed, as shown in the following diagram: the poisoned machine replaces the real Gateway MAC address with its own MAC address.

Now let's look at a detailed analysis of the virus.

The following diagram shows the mechanism used by this type of virus. Normally, when we open a Web page, the traffic goes to the Gateway machine directly (see pathway 4). But if the local network is infected by an ARP spoofing virus, the traffic goes through the poisoned machine before it goes to the Gateway, as indicated by pathway 5 and pathway 6 below:

The following steps describe what occurs.

First step: The poisoned machine broadcasts ARP spoofing packets saying "I am the Gateway"

Second step: Each machine in the subnet receives an ARP spoofing packet and updates its ARP table, so the ARP cache is poisoned.

Third step: A machine accesses the Internet through the poisoned machine, then the poisoned machine routes this HTTP packet through the Gateway (the poisoned machine uses a Net driver, such as wpcap.dll or WanPacket.dll, to get network traffic).

Fourth step: When the HTTP response returns, the poisoned machine inserts a malicious URL into the HTTP response packet and then sends the malicious packet to the target machine.

In the following code, we see how the virus inserts a malicious link:

In the code shown above, we can see partial IP address information. The information comes from the author's network environment, which is similar to the following:

0000b3b0  255.255.255.0  (subnet mask)
0000b3c0  10.xx.xx.58    (poisoned machine IP address)
0000b840  10.xx.xx.1     (correct Gateway address)
0000b850  10.xx.xx.*     (subnet information)

When the virus obtains this data, it scans the local subnet and then sends ARP spoofing packets to machines in the local subnet.

Let's see how the virus implements these functions:

In the code above, the virus calls a system DLL (iphlpapi.dll) to get general information about the local network adapter. The iphlpapi.dll file is a module containing the functions used by the Windows IP Helper API. Once the virus has the local network adapter information, it can craft spoofed ARP packets. The following graphic shows the detailed code:

We used OllyDbg to trace the virus into the Windows system space, and we obtained the code above. Before going further, some background knowledge is needed: the virus uses WinPcap to capture network traffic and insert malicious Web code into the HTTP response.

So what is WinPcap?

WinPcap is the industry-standard tool for link-layer network access in Windows environments. It allows applications to capture and transmit network packets, bypassing the protocol stack, and has additional useful features, including kernel-level packet filtering, a network statistics engine, and support for remote packet capture.

The ARP spoofing virus calls several functions from the wpcap.dll, as shown here:

(1) int pcap_loop()

Collect a group of packets.

(2) int pcap_sendpacket()

Send a raw packet.

(3) int pcap_setfilter()

Associate a filter to a capture.

(4) int pcap_compile()

Compile a packet filter, converting a high-level filtering expression into a program that can be interpreted by the kernel-level filtering engine.

For additional functional details about WinPcap, please see this Web page http://www.winpcap.org/docs/docs_40_2/html/group__wpcapfunc.html.

Note the following picture

The following code sample includes the malicious code:


If your local network has the ARP spoofing virus, and if you attempt to access any Web page, the ARP spoofing machine will send a malicious response. If the ARP spoofing virus is in a subnet of the WWW server group, any HTTP response from this subnet will be malicious. If the local network has an ARP spoofing virus, when you open any Web page, the Web page will look something like the following picture:

If an ARP spoofing virus poisons your network, you can use Ethereal to capture network traffic. If one IP address sends ARP broadcast packets continuously, then that IP address is suspicious. You can use the command "arp -a" to review which Gateway MAC address is being used. Confirm whether it is a real Gateway MAC address or not. If it is not a real Gateway MAC address, then you can be certain that you have an ARP spoofing virus in your network. You can use the false Gateway MAC address to find out which is the poisoned machine.
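
A quick manual check along those lines might look like the following sketch; the gateway address 10.0.0.1, the interface name and any MAC addresses involved are placeholders for your own environment:

# Check which MAC address the gateway currently resolves to
arp -a 10.0.0.1

# Watch for a stream of unsolicited ARP replies claiming to be the gateway
tcpdump -n -e -i eth0 'arp and arp[6:2] == 2'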

Websense Security customers are protected from such threats because we filter the injected malicious content before it reaches the desktop, even if the ARP spoofing virus exists inside your subnet.

Security Researcher: Kai Zhang

(websense.com)