Unified threat management (UTM) is a term used to describe network firewalls that combine many features in one box, including e-mail spam filtering, anti-virus capability, an intrusion detection or prevention system (IDS or IPS), and Web content filtering, along with the traditional functions of a firewall. These are application-layer firewalls that use proxies to process and forward all incoming traffic, though they can still frequently operate in a transparent mode that disguises this fact. If this consumes too much processor time, however, the higher-level inspection can be disabled so that the firewall functions like a much simpler network address translation (NAT) gateway.
A firewall is a dedicated appliance, or software running on another computer, that inspects network traffic passing through it and denies or permits passage based on a set of rules. More broadly, it is an integrated collection of security measures designed to prevent unauthorized electronic access to a networked computer system: a device, or set of devices, configured to permit, deny, encrypt, decrypt, or proxy traffic between different security domains based on rules and other criteria. Firewalls can be implemented in hardware, in software, or in a combination of both. They are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.
WAN optimization comprises techniques for improving the speed and efficiency of applications across a wide area network, so as to maximize business performance and address performance problems proactively.
WAN optimization products seek to accelerate a broad range of applications accessed by distributed enterprise users by eliminating redundant transmissions, staging data in local caches, compressing and prioritizing data, and streamlining chatty protocols (e.g., CIFS).
Improving application response time is the main benefit of WAN optimization, and it is essential where servers and IT resources are centralized. Optimizing the WAN also saves the cost of bandwidth upgrades.
WAN Optimization is a superset of WAFS (wide area file services) in that it also addresses:
• SSL-encrypted ASP and Intranet applications
• Multimedia e-learning applications
Component techniques of WAN Optimization include WAFS, CIFS proxy, HTTPS proxy, media multicasting, Web caching and bandwidth management.
A few WAN/Internet Optimization techniques:
Compression – Relies on data patterns that can be represented more efficiently. Best suited for point-to-point leased lines.
Protocol spoofing – Bundles multiple requests from chatty applications into one. Best suited for point-to-point WAN links.
Traffic shaping – Controls data usage based on spotting specific patterns in the data and allowing or disallowing specific traffic. Best suited for both point-to-point leased lines and Internet connections.
Equalizing – Makes assumptions about what needs immediate priority based on data usage. An excellent choice for wide-open, unregulated Internet connections and clogged VPN tunnels.
Connection limits – Prevents access gridlock in routers and access points due to denial of service or peer-to-peer traffic. Best suited for wide-open Internet access links; can also be used on WAN links.
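To make the compression technique concrete, the following sketch uses Python's standard zlib library to compress a highly redundant payload of the kind WAN links often carry. The payload and compression level are illustrative choices, not those of any particular product:

```python
import zlib

# Highly redundant payload, typical of repetitive protocol traffic
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 100

# Compress at the highest level; WAN optimizers make a similar
# CPU-versus-bandwidth trade-off on every transmitted stream
compressed = zlib.compress(payload, level=9)

ratio = len(compressed) / len(payload)
print(f"original: {len(payload)} bytes, "
      f"compressed: {len(compressed)} bytes (ratio {ratio:.2%})")
```

Because the payload repeats the same pattern, the compressed form is a small fraction of the original; random or already-compressed data would see far less benefit.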
Proxy caching accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost while significantly increasing performance. Most ISPs and large businesses have a caching proxy. These machines are built to deliver superb file-system performance (often with RAID and journaling) and also run heavily tuned TCP stacks.
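The caching idea can be sketched in a few lines. The `CachingProxy` class and the injected `fetch` callable below are hypothetical names used purely for illustration; the point is that repeated requests are answered from the local copy and never consume upstream bandwidth:

```python
import time

class CachingProxy:
    """Minimal in-memory caching proxy: repeated requests are served from
    a local copy instead of re-fetching from the origin. The fetch
    function is injected so the cache logic stays transport-agnostic."""

    def __init__(self, fetch, ttl=300):
        self.fetch = fetch          # callable: url -> bytes (origin fetch)
        self.ttl = ttl              # seconds a cached copy stays fresh
        self.cache = {}             # url -> (timestamp, body)
        self.hits = self.misses = 0

    def get(self, url):
        entry = self.cache.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            self.hits += 1
            return entry[1]         # cache hit: no upstream bandwidth used
        self.misses += 1
        body = self.fetch(url)      # cache miss: go to the origin server
        self.cache[url] = (time.time(), body)
        return body

# Simulated origin that records how often it is actually contacted
origin_calls = []
def origin_fetch(url):
    origin_calls.append(url)
    return b"<html>page for " + url.encode() + b"</html>"

proxy = CachingProxy(origin_fetch)
for _ in range(5):
    proxy.get("http://example.com/")  # only the first call reaches the origin
```

A real caching proxy also honours HTTP cache-control headers and validates stale entries with the origin; the TTL here is a simplification.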
Application acceleration is a combination of techniques that automatically optimize the performance of an application over the network, be it LAN or WAN. Such products can be described as appliances that “enable server/data center consolidation and deployment of browser-based application interfaces while lowering total cost of ownership.”
Most Application Acceleration products provide an application-independent foundation that automatically optimizes all applications that run over TCP. They also provide for application-specific optimizations for widely-used applications such as file sharing, email, databases, Web (including secure HTTPS) applications, and more.
By combining an application-independent foundation with application-specific modules, they provide a more powerful, flexible architecture that can deliver higher application acceleration along with additional optimizations for optimal application performance.
Web Security products allow organizations to secure Web traffic effectively while still enabling the latest Web-based tools and applications. These products analyze Web traffic in real time, instantly categorizing new sites and dynamic content, proactively discovering security risks, and blocking dangerous malware. They also protect against spyware, malicious mobile code, phishing attacks, bots and other threats, and block keylogger backchannel communications from reaching host servers. They also help in filtering and controlling the content that can be viewed on a network.
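The categorize-then-block decision such products make can be sketched as follows; the tiny category table is a hypothetical stand-in for the continuously updated classification feeds real products use:

```python
# Hypothetical category database; real products consult large,
# continuously updated classification feeds and real-time analysis.
URL_CATEGORIES = {
    "malware-host.example": "malware",
    "phish-bank.example":   "phishing",
    "news-site.example":    "news",
}
BLOCKED_CATEGORIES = {"malware", "phishing", "spyware"}

def check_request(host):
    """Return (allowed, category) for an outbound web request."""
    category = URL_CATEGORIES.get(host, "uncategorized")
    return (category not in BLOCKED_CATEGORIES, category)
```

A real gateway would classify unknown hosts dynamically rather than defaulting them to "uncategorized", and would apply per-user or per-group policy on top of the category decision.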
Messaging Security products filter spam and viruses in the cloud, providing bandwidth and administrative resource savings. They usually reside at the gateway to provide granular outbound email protection, allowing for finely tuned policies and an optional second layer for inbound filtering.
A few benefits of using these products are listed below:
• Improved network and cost efficiency – Increases bandwidth and reduces administrative time while keeping email policy control within the network.
• Mitigated risk and realized ROI – Provides definitive levels of protection and visibility, with drill-down, delegated-policy and user-based reporting.
• A single trusted vendor – Consolidates email security through one vendor (e.g., Websense) and provides support from a single point of contact.
Data loss prevention (DLP) is a watchword in most organizations today, since data is the key to competitive advantage and securing it from internal threats is a top concern. DLP is the use of various techniques to prevent critical data from leaving the organization without authorization.
DLP products can be defined as:
“Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use, through deep content analysis.”
Some of the most common techniques used in these products to detect and prevent unauthorized extrusion of data are:
1. Rule bases/regular expressions.
2. Database fingerprinting.
3. Exact file matching.
4. Partial document matching.
5. Statistical analysis.
6. Predefined categorization.
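The first technique, rule bases built from regular expressions, can be sketched as follows. The two rules and their patterns are simplified illustrations, not production-grade detectors (real credit-card rules, for instance, also validate the Luhn checksum):

```python
import re

# Hypothetical rule base: rule name -> compiled regular expression.
DLP_RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(text):
    """Return the names of all DLP rules that match the outbound text."""
    return [name for name, rx in DLP_RULES.items() if rx.search(text)]

msg = "Please charge card 4111 1111 1111 1111 and file SSN 123-45-6789."
violations = scan_outbound(msg)
```

A gateway DLP product would run such rules over email bodies, attachments and web uploads, then block, quarantine or log the message according to policy.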
Assessments are typically performed according to the following steps:
1. Cataloguing assets and capabilities (resources) in a system.
2. Assigning quantifiable value (or at least rank order) and importance to those resources.
3. Identifying the vulnerabilities or potential threats to each resource.
4. Mitigating or eliminating the most serious vulnerabilities for the most valuable resources.
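The four steps above can be sketched as a simple ranking exercise. The asset values and threat likelihoods below are invented for illustration:

```python
# Steps 1-2: catalogue assets and assign a quantifiable value to each.
assets = {"customer_db": 10, "web_server": 7, "print_server": 2}

# Step 3: identify a vulnerability and its likelihood per asset.
threats = [
    ("customer_db",  "sql_injection",  0.6),
    ("web_server",   "unpatched_os",   0.5),
    ("print_server", "default_passwd", 0.9),
]

# Step 4: rank by exposure (asset value x likelihood) so the most
# serious vulnerabilities on the most valuable assets are fixed first.
ranked = sorted(threats, key=lambda t: assets[t[0]] * t[2], reverse=True)
```

Note that the high-likelihood issue on the low-value print server ranks last: exposure, not likelihood alone, drives the mitigation order.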
Classical risk analysis is principally concerned with investigating the risks surrounding a physical plant (or some other object), its design and its operations. Such analyses tend to focus on causes and on the direct consequences for the studied object. Vulnerability analyses, on the other hand, focus both on consequences for the object itself and on primary and secondary consequences for the surrounding environment; they also concern themselves with the possibilities of reducing such consequences and of improving the capacity to manage future incidents.
Asset management is the set of business practices that join financial, contractual and inventory functions to support life cycle management and strategic decision making for the IT environment. It is a process of tracking information about technology assets throughout the entire asset life cycle from initial procurement to retirement.
Assets include all elements of software and hardware that are found in the business environment.
Software Asset Management
Software Asset Management applies to the business practices specific to software management, including software license management, configuration management, standardization of images and compliance with regulatory and legal restrictions, such as copyright law, Sarbanes-Oxley and software publisher contractual compliance.
Software licenses are referred to as entitlements; SAM programs confirm the user's right to use, or entitlement to, that software. Automation is used to facilitate this management.
Hardware Asset Management
Hardware asset management entails the management of the physical components of computers and computer networks, from acquisition through disposal. Common business practices include request and approval process, procurement management, life cycle management, redeployment and disposal management. A key component is capturing the financial information about the hardware life cycle which aids the organization in making business decisions based on meaningful and measurable financial objectives.
Role of IT Asset Management in an Organization
The IT Asset Management function is the primary point of accountability for the life-cycle management of information technology assets throughout the organization. Included in this responsibility are development and maintenance of policies, standards, processes, systems and measurements that enable the organization to manage the IT Asset Portfolio with respect to risk, cost, control and IT Governance, compliance and business performance objectives as established by the business.
IT asset management is supported by integrated software solutions that work with all departments involved in the procurement, deployment, management and expense reporting of IT assets. These solutions help organizations to:
• Uncover savings through process improvement and support for strategic decision making
• Gain control of the inventory
• Increase accountability to ensure compliance
Enterprise data privacy is closely tied to identity management, which in turn rests on authentication. Identity management is a set of rules and tools used in tandem to protect the enterprise's network from unauthorized users and to prevent misuse of the network. Authentication (from Greek αυθεντικός, "real" or "genuine", from authentes, "author") is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the thing are true. This might involve confirming the identity of a person, or assuring that a computer program is a trusted one, using certain authentication factors. An authentication factor is a piece of information and process used to authenticate or verify a person's identity for security purposes. Transactional authentication generally refers to the Internet-based security method of securely identifying a user through two- or three-factor authentication at the transaction level, rather than at the traditional session or logon level.
Types of Factor Authentications:
1. Two Factor Authentication: Two-factor authentication is a security process in which the user provides two means of identification, one of which is typically a physical token, such as a card, and the other of which is typically something memorized, such as a security code. In this context, the two factors involved are sometimes spoken of as something you have and something you know. A common example of two-factor authentication is a bank card: the card itself is the physical item and the personal identification number (PIN) is the data that goes with it.
2. Three Factor Authentication: is a security process in which the user has to provide the following three means of identification:
• Something the user has (e.g., ID card, security token, software token)
• Something the user knows (e.g., a password, pass phrase, or personal identification number (PIN))
• Something the user is or does (e.g., fingerprint or retinal pattern, DNA sequence, signature or voice recognition, unique bio-electric signals, or any other biometric identifier)
Tokens, biometrics, phones, smart cards and OTP tokens are a few examples of factors that could be used as something the user has.
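An OTP token is typically implemented as a time-based one-time password (TOTP, specified in RFC 6238): the token and the server share a secret and independently compute the same short-lived code. A minimal sketch of the algorithm, using a standard RFC test secret:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: the 'something you have'
    factor generated by an OTP token or a phone authenticator app."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

# Server and token share the secret; both compute the same code for
# each 30-second window, so the code is useless shortly after use.
shared_secret = b"12345678901234567890"          # RFC 6238 test secret
code = totp(shared_secret, for_time=59)          # RFC 6238 test time
```

Because the counter is derived from the clock, a stolen code expires within one time step, which is what makes it a meaningful second factor alongside the password.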
Advantages of using two-/three-factor authentication:
1. Drastically reduces the incidence of online identity theft, phishing expeditions and other online frauds.
2. Ensures that you have a very strong authentication method in place.
3. Increases the confidence and trust levels of the users interacting with your network.
4. Adheres to the compliance rules of various standards especially if you are in the financial domain.
5. Ensures that you have sufficient levels of security to thwart any attacks on your network.
6. Allows you to provide secure remote access to your network.
Network Access Control & LAN Security
Network Access Control is a set of protocols used to define how to secure the network nodes prior to the nodes accessing the network. It is also an approach to computer network security that attempts to unify endpoint security technology (such as antivirus, host intrusion prevention, and vulnerability assessment), user or system authentication and network security enforcement.
Network Access Control (NAC) aims to do exactly what the name implies: control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do.
Benefits of Network Access Control
• Automatic remediation process, i.e., fixing non-compliant nodes before allowing access.
• Allowing the seamless integration of network infrastructure such as routers, switches, back office servers and end user computing equipment to ensure the information system is operating securely before interoperability is allowed.
• Mitigation of zero-day attacks – The key value proposition of NAC solutions is the ability to prevent end-stations that lack antivirus, patches, or host intrusion prevention software from accessing the network and placing other computers at risk of cross-contamination by network worms.
• Policy enforcement – NAC solutions allow network operators to define policies, such as the types of computers or roles of users allowed to access areas of the network, and enforce them in switches, routers, and network middleboxes.
• Identity and access management – Where conventional IP networks enforce access policies in terms of IP addresses, NAC environments attempt to do so based on authenticated user identities, at least for user end-stations such as laptops and desktop computers.
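A pre-admission posture check of the kind described above can be sketched as follows; the policy fields and the posture-report format are hypothetical simplifications of what real NAC agents report:

```python
# Hypothetical policy the network enforces before admitting a node.
REQUIRED_POLICY = {
    "antivirus_running": True,
    "os_patch_level":    42,   # minimum acceptable patch level
}

def admit(posture):
    """Pre-admission check on an endpoint's self-reported posture:
    return 'allow', or 'quarantine' so the node can be remediated
    (patched, antivirus enabled) before it touches the network."""
    if not posture.get("antivirus_running", False):
        return "quarantine"
    if posture.get("os_patch_level", 0) < REQUIRED_POLICY["os_patch_level"]:
        return "quarantine"
    return "allow"
```

In a deployed NAC system the "quarantine" verdict maps to a restricted VLAN or ACL where only remediation servers are reachable; once the node is fixed, it is re-evaluated and admitted.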
An Intrusion Prevention System (IPS) is a network security device that monitors network and/or system activities for malicious or unwanted behaviour and can react, in real time, to block or prevent those activities. An IPS can make access control decisions based on application content, rather than on IP addresses or ports as traditional firewalls do. However, to improve performance and the accuracy of classification mapping, most IPSs use the destination port in their signature format. As IPS systems were originally a literal extension of intrusion detection systems, the two remain closely related.
Intrusion prevention systems may also serve secondarily at the host level to deny potentially malicious activity. There are advantages and disadvantages to host-based IPS compared with network-based IPS. In many cases, the technologies are thought to be complementary.
An Intrusion detection system (IDS) is software and/or hardware designed to detect unwanted attempts at accessing, manipulating, and/or disabling of computer systems, mainly through a network, such as the Internet. These attempts may take the form of attacks, as examples, by crackers, malware and/or disgruntled employees. An IDS cannot directly detect attacks within properly encrypted traffic.
An intrusion detection system is used to detect several types of malicious behaviour that can compromise the security and trust of a computer system. This includes network attacks against vulnerable services, data-driven attacks on applications, and host-based attacks such as privilege escalation, unauthorized logins and access to sensitive files.
Types of Intrusion-Detection systems
• A network intrusion detection system (NIDS).
• A protocol-based intrusion detection system (PIDS).
• An application protocol-based intrusion detection system (APIDS), specific to the middleware/business logic as it transacts with the database.
• A host-based intrusion detection system (HIDS) consists of an agent on a host which identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, capability/acl databases) and other host activities and state. An example of a HIDS is OSSEC.
• A hybrid intrusion detection system combines two or more approaches. Host agent data is combined with network information to form a comprehensive view of the network. An example of a Hybrid IDS is Prelude.
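The file-system-modification monitoring that a HIDS agent performs can be sketched as a baseline-and-compare loop over file hashes; the temporary file below stands in for a monitored system file:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 digest per monitored file (the trusted baseline)."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_changes(baseline):
    """Re-hash each file and report those that no longer match the baseline."""
    return [p for p, h in baseline.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != h]

# Demonstration with a temporary file standing in for a system file.
with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "passwd"
    target.write_text("root:x:0:0\n")
    baseline = snapshot([target])
    target.write_text("root:x:0:0\nattacker:x:0:0\n")  # simulated tampering
    changed = detect_changes(baseline)
```

A production HIDS such as OSSEC combines this kind of integrity checking with log analysis and system-call monitoring, and protects the baseline itself from tampering.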
Disk encryption is a special case of data at rest protection when the storage media is a sector-addressable device (e.g., a hard disk, USB drive, Zip drive or a flash card/drive). It is a technique that allows data to be protected even when the OS is not active, for example, if data is read directly from the hardware as compared to access restrictions commonly enforced by an OS.
What are the types of Encryption?
Encryption can happen at the following levels:
1. Full disk encryption – ideal for devices on the move, such as laptops, notebooks, palmtops and USB sticks
2. Partition-level encryption
3. Encrypted containers stored in the regular file system, also called hidden volumes
4. File-system-level encryption
Most Disk Encryption systems use a combination of the below mentioned techniques:
• Cipher Block Chaining (CBC)
• Electronic Code Book (ECB)
• Cipher Feedback (CFB)
• Output Feedback (OFB)
• Cryptographically secure pseudorandom number generators (CSPRNG)
• Message authentication codes (MAC)
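To illustrate the chaining idea behind CBC, the toy sketch below XORs each plaintext block with the previous ciphertext block before "encrypting" it. XOR with a fixed key is used as a stand-in cipher purely to expose the structure; it is in no way secure, and real disk encryption uses a block cipher such as AES:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(blocks, key, iv):
    """Toy CBC: each plaintext block is XORed with the previous
    ciphertext block before 'encryption' (here just XOR with the key,
    which is NOT secure; the point is the chaining structure)."""
    out, prev = [], iv
    for block in blocks:
        ct = xor_bytes(xor_bytes(block, prev), key)
        out.append(ct)
        prev = ct
    return out

def cbc_decrypt(blocks, key, iv):
    out, prev = [], iv
    for ct in blocks:
        out.append(xor_bytes(xor_bytes(ct, key), prev))
        prev = ct
    return out

key = b"\x5a" * 8
iv = b"\x13" * 8
plaintext = [b"AAAAAAAA", b"AAAAAAAA"]      # two identical disk sectors
ciphertext = cbc_encrypt(plaintext, key, iv)
```

The chaining is what matters: the two identical plaintext blocks produce different ciphertext blocks, so an attacker reading the raw disk cannot spot repeated sectors, which is exactly the weakness of plain ECB.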
Advantages of Disk Encryption:
• Ensures confidentiality of Data
• Protects data even when the OS is not in operation
• Ensures data cannot be easily accessed by unauthorized personnel.
• Makes the disk/data unusable in the event of unauthorized access.
• Encryption and decryption are done transparently, which ensures that users need not bother about the internal workings.
• Assures that intellectual property and sensitive or legally protected information is accessible only to authorized users
• Meets regulatory compliance requirements through strong, centrally managed encryption
SSL-VPN stands for Secure Sockets Layer Virtual Private Network. The term refers to any device capable of creating a semi-permanent encrypted tunnel over the public network between two private machines or networks to pass non-protocol-specific, or arbitrary, traffic. This tunnel can carry all forms of traffic between these two machines, meaning it encrypts on a link basis, not on a per-application basis.
It is a mechanism provided to communicate securely between two points with an insecure network in between them.
Benefits of using SSL VPN:
• Improves workforce productivity, since employees and contractors can perform tasks even when not physically present in their usual work facilities.
• Easy deployment since it does not require any special client software to be installed.
• Provides more security options.
• Improved manageability due to highly configurable access control capabilities, health checks etc.
• Lowers costs because of increased self-service capabilities for conducting business with outside parties such as suppliers and customers. Employees can work remotely on a regular basis (e.g., IT consulting), thereby allowing the organization to maintain less office space (and save money).
• Increased self-service capabilities for suppliers improve their efficiency, yielding better-negotiated service/product rates.
• If remote access is used as part of business-continuity strategy, fewer seats may be necessary at disaster-recovery/business-continuity facilities than if all workers must work at the secondary site.
Antivirus software is computer software used to identify and remove computer viruses, as well as many other types of harmful computer software, collectively referred to as malware. While the first antivirus software was designed exclusively to combat computer viruses (hence “antivirus”), modern antivirus software can protect computer systems against a wide range of malware, including worms, phishing attacks, rootkits, and Trojans.
Most of the Antivirus products today use a combination of the below mentioned techniques to combat the threats existent:
1. Signature based detection
2. Malicious activity detection
3. Heuristic-based detection
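Signature-based detection, in its simplest form, compares a fingerprint of the scanned file against a database of known-bad fingerprints. The sketch below uses SHA-256 digests of invented "samples"; real engines also match byte patterns inside files rather than whole-file hashes alone:

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known-bad files.
known_bad = [b"fake malware sample #1", b"fake malware sample #2"]
SIGNATURES = {hashlib.sha256(s).hexdigest() for s in known_bad}

def scan(data: bytes) -> bool:
    """Signature-based detection: flag the file if its digest is in the
    database. Heuristic and malicious-activity detection exist precisely
    because this approach misses samples with no signature yet."""
    return hashlib.sha256(data).hexdigest() in SIGNATURES
```

The limitation is visible in the code: changing a single byte of a sample changes its digest and evades the lookup, which is why vendors layer heuristic and behaviour-based detection on top of signatures.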
Depending on the number of machines to be scanned and the placement of the product, there are two categories of Anti-Virus products namely:
1. Desktop Level Antivirus
2. Gateway Level Antivirus