- GLOBAL THEMES
- BLOG / NEWS
For Information and Communication Technology (ICT), it is all about enterprise application software specifically designed to accomplish a task. It can be an operating system (OS - Windows, Linux, UNIX), a business application (database, email server, internet server) or a mission-critical application that gets the job done at the individual, team, workgroup or enterprise-wide level. It ranges from office suite applications, to departmental applications such as finance and accounting, procurement, IT, marketing and sales, to more specialized applications such as IT auditing or security assessment tools/systems, infrastructure monitoring tools/systems and enterprise-wide business applications.
Please browse through our Featured Application (Software) section for a brief overview before exploring each specific section in depth.
The world keeps changing, and so does the technology required to adapt to the changing world's requirements. One of the best ways to find out what we do and how we can help is through this list of evolving solution topics.
Over the years, the company has acquired a great deal of technology know-how through real-life deployment, project and services experience, while undergoing continuous development and skill certification to ensure competitiveness and alignment with the business direction of the company. Here is the up-to-date list of core domains of related technology and solutions we group together to form the core domains of the business we are in.
VAD/VAR (Value Added Distribution / Value Added Reseller)
Business Distribution and International Trade
Focus solutions and packaged products distribution and international trade
Governance, Risk Management and Regulatory Compliance (GRC)
Security Information and Event Management (SIEM), Network, Information & Data Security
Network, Server & Application Management, Environmental & Datacenter Management
ALM (Application Lifecycle Management)
Software and Application Lifecycle Management (ALM)
Virtualization and Cloud Management
Business Applications Management
Business Application and Technology Transformation Management
Not all company names are brand names; in particular, very large companies possess a range of brands in their brand portfolio. For certain technologies, E-SPIN focuses on selective brands within a manufacturer's full range of products and builds expertise on those brand solutions.
Certain products, by themselves, are very difficult for users to buy unless they are packaged, bundled or complemented with other related products (in a combination of hardware, software and services). Below is the list of partner products E-SPIN is actively serving in the market.
Enterprise security involves various related categories of security systems/devices/appliances/hardware or virtual appliances (or applications), from infrastructure and gateway security to specialized security equipment and appliances.
Unified threat management (UTM) is an approach to security management that allows an administrator to monitor and manage a wide variety of security-related applications and infrastructure components through a single management console.
UTMs, which are typically purchased as cloud services or network appliances, provide firewall, intrusion detection, antimalware, spam and content filtering and VPN capabilities in one integrated package that can be installed and updated easily. UTMs for enterprise customers may also include more advanced features such as identity-based access control, load balancing, quality of service (QoS), intrusion prevention, SSL and SSH inspection and application awareness.
The principal advantage of a UTM product is its ability to reduce complexity. The principal disadvantage is that a UTM appliance can become a single point of failure (SPOF).
Next-generation firewalls integrate three key assets: enterprise firewall capabilities, an intrusion prevention system (IPS) and application control. Like the introduction of stateful inspection in first-generation firewalls, NGFWs bring additional context to the firewall’s decision-making process by providing it with the ability to understand the details of the Web application traffic passing through it and taking action to block traffic that might exploit vulnerabilities.
Next-generation firewalls combine the capabilities of traditional firewalls -- including packet filtering, network address translation (NAT), URL blocking and virtual private networks (VPNs) -- with Quality of Service (QoS) functionality and features not traditionally found in firewall products. These include intrusion prevention, SSL and SSH inspection, deep-packet inspection and reputation-based malware detection as well as application awareness. The application-specific capabilities are meant to thwart the growing number of application attacks taking place on layers 4-7 of the OSI network stack.
A Web application firewall (WAF) is a firewall that monitors, filters or blocks data packets as they travel to and from a Web application. A WAF can be either network-based, host-based or cloud-based and is often deployed through a proxy and placed in front of one or more Web applications. Running as a network appliance, server plug-in or cloud service, the WAF inspects each packet and uses a rule base to analyze Layer 7 web application logic and filter out potentially harmful traffic.
Web application firewalls are a common security control used by enterprises to protect Web applications against zero-day exploits, impersonation and known vulnerabilities and attackers. Through customized inspections, a WAF is also able to prevent cross-site scripting (XSS) attacks, SQL injection attacks, session hijacking and buffer overflows, which traditional network firewalls and other intrusion detection systems may not be capable of doing. WAFs are especially useful to companies that provide products or services over the Internet.
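To make the rule-based inspection described above concrete, here is a minimal sketch of how a WAF-style rule base might flag suspicious request payloads. The patterns below are deliberately naive and invented for illustration; real rule sets (such as the OWASP Core Rule Set) are far larger and combine normalization, scoring and anomaly thresholds.

```python
import re

# Illustrative rules only -- a real WAF rule base is far more extensive.
RULES = [
    (re.compile(r"(?i)<script\b"), "possible XSS"),
    (re.compile(r"(?i)\bunion\b.+\bselect\b"), "possible SQL injection"),
    (re.compile(r"(?i)\b(or|and)\b\s+1\s*=\s*1"), "possible SQL injection"),
]

def inspect_request(path: str, query: str) -> list[str]:
    """Return the rule hits for a request's path and query string."""
    payload = f"{path}?{query}"
    return [label for pattern, label in RULES if pattern.search(payload)]

hits = inspect_request("/search", "q=test' OR 1=1 --")
```

A request carrying a classic `OR 1=1` probe would be flagged, while ordinary traffic produces no hits and is passed through.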
Network-based WAFs are usually hardware-based and can reduce latency because they are installed locally, as close to the application as possible. Most major network-based WAF vendors allow replication of rules and settings across multiple appliances, thereby making large scale deployment and configuration possible. The biggest drawback for this type of WAF product is cost.
Host-based WAFs may be fully integrated into the application code itself. The benefits of application-based WAF implementation include low cost and increased customization options. Application-based WAFs can be a challenge to manage because they require local libraries and depend upon local server resources to run effectively.
Cloud-hosted WAFs offer a low-cost solution for organizations that want a turnkey product. Cloud WAFs are easy to deploy, are available on a subscription basis and often require only a simple DNS change to redirect application traffic. Although it can be challenging to place responsibility for filtering an organization's web application traffic with a third-party provider, the strategy allows applications to be protected across a broad spectrum of hosting locations and use similar policies to protect against application layer attacks.
Intrusion prevention is a preemptive approach to network security used to identify potential threats and respond to them swiftly. Like an intrusion detection system (IDS), an intrusion prevention system (IPS) monitors network traffic. However, because an exploit may be carried out very quickly after the attacker gains access, intrusion prevention systems also have the ability to take immediate action, based on a set of rules established by the network administrator. For example, an IPS might drop a packet that it determines to be malicious and block all further traffic from that IP address or port. Legitimate traffic, meanwhile, should be forwarded to the recipient with no apparent disruption or delay of service.
An email security gateway is a product or service that is designed to prevent the transmission of emails that break company policy, send malware or transfer information with malicious intent.
Businesses of all sizes use email security gateways to prevent data loss, perform email encryption, compensate for weak partner security and protect against known and unknown malware. Solution types for email security gateways include private cloud, hybrid cloud, hardware appliances, virtual appliances and email server-based products. These solutions offer similar functions, and many providers offer more than one form.
Important considerations for choosing an email security gateway include the sophistication of the basic security functions, the additional security functions that are available, ease of management, usability and customizability, typical false positive and false negative rates, and reliance on external systems for email processing and/or storage. Some offer sandboxing capabilities to help identify unknown risks.
To protect data on computers and devices that may be accessible outside the company, it may be advisable to choose an email security gateway that provides end-to-end encryption.
A secure Web gateway is a type of security solution that prevents unsecured traffic from entering an internal network of an organization. It is used by enterprises to protect their employees/users from accessing and being infected by malicious Web traffic, websites and virus/malware. It also ensures the implementation and compliance of the organization's regulatory policy.
A secure Web gateway is primarily used to monitor and prevent malicious traffic and data from entering, or even leaving, an organization’s network. Typically, it is implemented to secure an organization against threats originating from the Internet, websites and other Web 2.0 products/services. It is generally implemented through a hardware/software gateway device/application implemented at the outer boundaries of a network. Some of the features a secure Web gateway provides include URL filtering, application level control, data leakage prevention and virus/malware code detection.
An SSL accelerator can be a card attached to a server or appliance, or a dedicated appliance. It is used for SSL acceleration, offloading SSL encryption processing from the web application system.
A server accelerator card (also known as an SSL card) is a Peripheral Component Interconnect (PCI) card used to generate encryption keys for secure transactions on e-commerce Web sites. When a secure transaction is initiated, the Web site's server sends its certificate, which has been provided by a certifying authority, to the client machine to verify the Web site's authenticity. After this exchange, a secret key is used to encrypt all data transferred between sender and receiver so that all personal and credit card information is protected. This process can severely overload a server, resulting in fewer transactions processed per second, which means fewer sales. The server accelerator card takes over this process, thus reducing the load on the server. Server accelerator cards support a number of security protocols, including Secure Sockets Layer (SSL) and Secure Electronic Transaction (SET). The server accelerator card is installed into the PCI slot of the server. A software driver is loaded, and the server is ready to receive orders. This is much easier and more cost-effective than buying additional servers. Additional cards can be installed as the server's secure transactions increase.
There are also SSL acceleration appliances. These are external units that have server accelerator cards installed inside them. The unit is then plugged into the server. When a secure transaction is detected, the transaction is routed to the SSL acceleration unit for processing. SSL accelerator appliances can be added as needed by clustering them together.
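The "detect a secure transaction and route it to the accelerator" step above can be illustrated with the TLS record format: a TLS connection opens with a handshake record whose first byte is 0x16, followed by a 0x03 version byte. The routing targets below are invented names for illustration.

```python
# A rough sketch: inspect the first bytes of a TCP stream and decide
# whether to hand the connection to the SSL offload unit.
def route_connection(first_bytes: bytes) -> str:
    # TLS handshake record: content type 0x16, major version 0x03
    if len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        return "ssl-accelerator"   # offload the expensive handshake crypto
    return "web-server"            # plain traffic goes straight through

route_connection(b"\x16\x03\x01\x00\x05")   # TLS ClientHello -> accelerator
route_connection(b"GET / HTTP/1.1\r\n")     # plain HTTP -> web server
```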
File content security, which would cover online file shares, portable file storage and services such as SharePoint, is a significant concern for your networks. Advanced cyber attackers can breach file content security and then launch advanced attacks capable of compromising key systems in an organization.
File Content Security products help prevent, detect and respond to cyber attacks by scanning file content for signs of malicious threats. These threats might be brought into an organization from outside sources, such as online file sharing services and portable file storage devices.
Benefits of File Content Security
A data center (or datacenter) is a facility composed of networked computers and storage that businesses or other organizations use to organize, process, store and disseminate large amounts of data. A business typically relies heavily upon the applications, services and data contained within a data center, making it a focal point and critical asset for everyday operations.
Data centers are not a single thing, but rather, a conglomeration of elements. At a minimum, data centers serve as the principal repositories for all manner of IT equipment, including servers, storage subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks used to organize and interconnect the IT equipment. A data center must also contain an adequate infrastructure, such as power distribution and supplemental power subsystems, including electrical switching; uninterruptible power supplies; backup generators and so on; ventilation and data center cooling systems, such as computer room air conditioners; and adequate provisioning for network carrier (telco) connectivity. All of this demands a physical facility with physical security and sufficient physical space to house the entire collection of infrastructure and equipment.
Data center consolidation and colocation
There is no requirement for a single data center, and modern businesses may use two or more data center installations across multiple locations for greater resilience and better application performance, which lowers latency by locating workloads closer to users.
Conversely, a business with multiple data centers may opt to consolidate data centers, reducing the number of locations in order to minimize the costs of IT operations. Consolidation typically occurs during mergers and acquisitions when the majority business doesn't need the data centers owned by the subordinate business.
Alternatively, data center operators can pay a fee to rent server space and other hardware in a colocation facility. Colocation is an appealing option for organizations that want to avoid the large capital expenditures associated with building and maintaining their own data centers. Today, colocation providers are expanding their offerings to include managed services, such as interconnectivity, allowing customers to connect to the public cloud.
IT operations are a crucial aspect of most organizational operations around the world. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of mechanical cooling and power systems (including emergency backup power generators) serving the data center along with fiber optic cables.
Data centers can be defined by various levels of reliability or resilience, sometimes referred to as data center tiers. For example, a tier 1 data center is little more than a server room, while a tier 4 data center offers redundant subsystems and high security.
Data center architecture and design
Although almost any suitable space could conceivably serve as a "data center," the deliberate design and implementation of a data center requires careful consideration. Beyond the basic issues of cost and taxes, sites are selected based on a multitude of criteria, such as geographic location, seismic and meteorological stability, access to roads and airports, availability of energy and telecommunications and even the prevailing political environment.
Once a site is secured, the data center architecture can be designed with attention to the mechanical and electrical infrastructure, as well as the composition and layout of the IT equipment. All of these issues are guided by the availability and efficiency goals of the desired data center tier.
Energy consumption and efficiency
Data center designs also recognize the importance of energy efficiency. A simple data center may need only a few kilowatts of energy, but an enterprise-scale data center installation can demand tens of megawatts or more. Today, the green data center, which is designed for minimum environmental impact through the use of low-emission building materials, catalytic converters and alternative energy technologies, is growing in popularity.
Organizations often measure data center energy efficiency through a metric called power usage effectiveness (PUE), which represents the ratio of total power entering the data center divided by the power used by IT equipment. However, the subsequent rise of virtualization has allowed for much more productive use of IT equipment, resulting in much higher efficiency, lower energy use and energy cost mitigation. Metrics such as PUE are no longer central to energy efficiency goals, but organizations may still gauge PUE and employ comprehensive power and cooling analyses to better understand and manage energy efficiency.
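As a concrete illustration, PUE is a simple ratio of total facility power to IT equipment power (the figures below are invented):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power.

    An ideal facility approaches 1.0; everything above 1.0 is overhead
    spent on cooling, power conversion, lighting and so on.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# e.g. 1,500 kW entering the facility, 1,000 kW reaching IT equipment
print(round(pue(1500, 1000), 2))  # → 1.5
```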
Data center security and safety
Data center designs must also implement sound safety and security practices. For example, safety is often reflected in the layout of doorways and access corridors, which must accommodate the movement of large, unwieldy IT equipment, as well as permit employees to access and repair the infrastructure. Fire suppression is another key safety area, and the extensive use of sensitive, high-energy electrical and electronic equipment precludes common sprinklers. Instead, data centers often use environmentally friendly chemical fire suppression systems, which effectively starve a fire of oxygen while mitigating collateral damage to the equipment. Since the data center is also a core business asset, comprehensive security measures, like badge access and video surveillance, help to detect and prevent malfeasance by employees, contractors and intruders.
Data center infrastructure management and monitoring
Modern data centers make extensive use of monitoring and management software. Software such as data center infrastructure management tools allow remote IT administrators to oversee the facility and equipment, measure performance, detect failures and implement a wide array of corrective actions, without ever physically entering the data center room.
The growth of virtualization has added another important dimension to data center infrastructure management. Virtualization now supports the abstraction of servers, networks and storage, allowing every computing resource to be organized into pools without regard to their physical location. Administrators can then provision workloads, storage instances and even network configuration from those common resource pools. When administrators no longer need those resources, they can return them to the pool for reuse. All of these actions can be implemented through software, giving traction to the term software-defined data center.
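The pooled provisioning pattern described above -- draw a resource from a shared pool, return it for reuse -- can be sketched as a toy model. The names and sizes are invented; a real software-defined data center does this through orchestration APIs rather than a dictionary.

```python
# A toy storage pool: workloads provision capacity from a shared pool
# and release it back for reuse, with no regard to physical location.
class ResourcePool:
    def __init__(self, capacity_gb: int):
        self.free_gb = capacity_gb
        self.allocations = {}

    def provision(self, workload: str, size_gb: int) -> bool:
        if size_gb > self.free_gb:
            return False                     # pool exhausted
        self.free_gb -= size_gb
        self.allocations[workload] = size_gb
        return True

    def release(self, workload: str) -> None:
        self.free_gb += self.allocations.pop(workload)  # back to the pool

storage = ResourcePool(capacity_gb=1000)
storage.provision("web-tier", 300)
storage.release("web-tier")   # the capacity is immediately reusable
```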
Data center vs. cloud
Data centers are increasingly implementing private cloud software, which builds on virtualization to add a level of automation, user self-service and billing/chargeback to data center administration. The goal is to allow individual users to provision workloads and other computing resources on-demand, without IT administrative intervention.
It is also increasingly possible for data centers to interface with public cloud providers. Platforms such as Microsoft Azure emphasize the hybrid use of local data centers with Azure or other public cloud resources. The result is not an elimination of data centers, but rather, the creation of a dynamic environment that allows organizations to run workloads locally or in the cloud or to move those instances to or from the cloud as desired.
A data center involves a lot of interrelated network, server, storage and application systems, plus other complementary and supplementary systems, connected in a single facility. Knowing the availability and continuous availability of the entire data center, which data center element has a problem or warning, the link connectivity and network traffic within the data center and out to the Internet, and having continuous access to real-time reports for making timely data center management decisions, is crucial.
E-SPIN is active in supplying system solutions for data centers, whether run internally by a corporation or government, or by shared data center service providers offering colocation/hosting at different sizes and scales of operation. Please feel free to contact E-SPIN for project and operation requirements.
Data center virtualization is the process of designing, developing and deploying a data center on virtualization and cloud computing technologies.
It primarily enables virtualizing physical servers in a data center facility along with storage, networking and other infrastructure devices and equipment. Data center virtualization usually produces a virtualized, cloud or colocated virtual/cloud data center.
Data center virtualization encompasses a broad range of tools, technologies and processes that enable a data center to operate and provide services on top of virtualization layer/technology. Using data center virtualization, an existing or a standard data center facility can be used to provide/host multiple virtualized data centers on the same physical infrastructure, which can simultaneously be used by separate applications and/or organizations. This not only helps in optimal IT infrastructure/resource utilization, but also in reducing data center capital and operational costs.
E-SPIN is active in supplying and implementing server and data center virtualization for various use cases, from corporate and government agency server rooms to data centers and related infrastructure, making them private, public and hybrid cloud computing ready. Feel free to contact E-SPIN for project and operation requirements.
A network operations center (NOC) is a place from which administrators supervise, monitor and maintain a telecommunications network. Large enterprises with large networks as well as large network service providers typically have a network operations center, a room containing visualizations of the network or networks that are being monitored, workstations at which the detailed status of the network can be seen, and the necessary software to manage the networks. The network operations center is the focal point for network troubleshooting, software distribution and updating, router and domain name management, performance monitoring, and coordination with affiliated networks.
E-SPIN is active in supplying NOC solutions, whether for end-to-end unified infrastructure monitoring, for network monitoring only (i.e., a Network Management System, NMS), for use by telcos for Network Element (NE) monitoring, or for corporate and government agency grade NOCs, from multinational infrastructure monitoring to national infrastructure monitoring for federal and state governments, as well as military use cases. Feel free to contact E-SPIN for related operation and project requirements.
A security operations center (SOC) is a facility that houses an information security team responsible for monitoring and analyzing an organization’s security posture on an ongoing basis. The SOC team’s goal is to detect, analyze, and respond to cybersecurity incidents using a combination of technology solutions and a strong set of processes. Security operations centers are typically staffed with security analysts and engineers as well as managers who oversee security operations. SOC staff work closely with organizational incident response teams to ensure security issues are addressed quickly upon discovery.
Security operations centers monitor and analyze activity on networks, servers, endpoints, databases, applications, websites, and other systems, looking for anomalous activity that could be indicative of a security incident or compromise. The SOC is responsible for ensuring that potential security incidents are correctly identified, analyzed, defended, investigated, and reported.
E-SPIN is active in supplying Security Operations Center (SOC) systems, from Security Information and Event Management (SIEM), Unified Security Monitoring (USM), Vulnerability Management (VM) and penetration testing systems, to other related equipment, devices and systems for the SOC or for national and military grade cyber defence and operations centers, covering both cyber offensive and cyber defensive operations. Feel free to contact E-SPIN for related needs and requirements.
Besides applications and networking, servers are one of the main pillars of our product solutions.
Please browse through our Servers section in brief below before exploring it further in depth.
A tower server is a computer intended for use as a server and built in an upright cabinet that stands alone. The cabinet, called a tower, is similar in size and shape to the cabinet for a tower-style personal computer. This is in contrast to rack servers or blade servers, which are designed to be rack-mounted.
Advantages of tower servers include:
A rack server, also called a rack-mounted server, is a computer dedicated to use as a server and designed to be installed in a framework called a rack. The rack contains multiple mounting slots called bays, each designed to hold a hardware unit secured in place with screws. A rack server has a low-profile enclosure, in contrast to a tower server, which is built into an upright, standalone cabinet.
A single rack can contain multiple servers stacked one above the other, consolidating network resources and minimizing the required floor space. The rack server configuration also simplifies cabling among network components. In an equipment rack filled with servers, a special cooling system is necessary to prevent excessive heat buildup that would otherwise occur when many power-dissipating components are confined in a small space.
A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is a server in its own right, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adaptor (HBA) and other input/output (IO) ports.
Blade servers allow more processing power in less rack space, simplifying cabling and reducing power consumption. According to a SearchWinSystems.com article on server technology, enterprises moving to blade servers can experience as much as an 85% reduction in cabling for blade installations over conventional 1U or tower servers. With so much less cabling, IT administrators can spend less time managing the infrastructure and more time ensuring high availability.
Each blade typically comes with one or two local ATA or SCSI drives. For additional storage, blade servers can connect to a storage pool facilitated by network-attached storage (NAS), Fibre Channel, or an iSCSI storage-area network (SAN). The advantage of blade servers comes not only from the consolidation benefits of housing several servers in a single chassis, but also from the consolidation of associated resources (like storage and networking equipment) into a smaller architecture that can be managed through a single interface.
A blade server is sometimes referred to as a high-density server and is typically used in a clustering of servers that are dedicated to a single task, such as:
Like most clustering applications, blade servers can also be managed to include load balancing and failover capabilities.
As enterprise servers grow and expand, it becomes impossible for an individual to check each server one at a time. A more systematic and scalable way to perform system management and monitoring is required. It helps to automate most of the system checks and performance monitoring, and to alert proactively when server storage/memory/processor utilization runs high, triggering an email alert before it becomes an incident and brings downtime to system operations.
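The proactive, threshold-based alerting described above can be sketched in a few lines. The thresholds, hostname and metrics below are invented for illustration; a real deployment would collect metrics via SNMP, WMI or agents and raise alerts through email or a ticketing integration.

```python
# Utilization thresholds (percent) above which an alert is raised.
THRESHOLDS = {"cpu_pct": 90, "memory_pct": 85, "disk_pct": 80}

def check_server(hostname: str, metrics: dict) -> list[str]:
    """Return alert messages for every metric at or above its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            alerts.append(f"{hostname}: {name} at {value}% (limit {limit}%)")
    return alerts

alerts = check_server("db-01", {"cpu_pct": 95, "memory_pct": 60, "disk_pct": 82})
```

Here CPU and disk utilization would each trigger an alert before an actual outage occurs, while the healthy memory reading stays silent.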
E-SPIN is active in deploying system monitoring (or server, services and performance monitoring) systems for corporate and government agencies at various scales, whether focused on server systems only or integrated with storage and network system monitoring as a unified monitoring solution. Feel free to contact E-SPIN for project and operation requirements.
A server is a running computing system with many moving parts, components and consumable items. Over the server's lifespan, out-of-warranty, at-risk parts and components will need replacement. In particular, those out-of-warranty, at-risk parts and components need to be proactively managed to prevent incident downtime, alongside ongoing storage, processor and memory expansion.
E-SPIN is active in the continuous supply of enterprise application servers for various project and operation requirements. Feel free to contact E-SPIN for your ad hoc or planned server parts, components or consumables.
An enterprise data storage device or system can be removable, internal or external storage (such as Network Attached Storage, NAS, or a Storage Area Network, SAN). E-SPIN commonly supplies all of them as part of system project delivery, ongoing storage management and maintenance together with storage resource monitoring (SRM), or as a package deal for infrastructure or system modernization/upgrade or replacement.
Types of storage
There are many types of data storage, with various levels of capacity, speed, cost and technology. The main types in use today include hard disk drives (HDDs), optical storage and solid-state drives (SSDs)/flash storage.
Bits and bytes are the basic measurements for computer storage. Modern capacity measurements -- and their abbreviations -- to know are: kilobit (Kb), megabit (Mb), gigabit (Gb), terabit (Tb), petabit (Pb), exabit (Eb), zettabit (Zb) and yottabit (Yb).
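Since each decimal (SI) step in the list above is a factor of 1,000 and a byte is 8 bits, conversions are straightforward arithmetic:

```python
# Decimal (SI) bit units, each 1,000x the previous.
UNITS = ["b", "Kb", "Mb", "Gb", "Tb", "Pb", "Eb", "Zb", "Yb"]

def to_bits(value: float, unit: str) -> float:
    """Convert a value in the given unit to plain bits."""
    return value * 1000 ** UNITS.index(unit)

print(to_bits(1, "Gb"))       # 1 gigabit = 1,000,000,000 bits
print(to_bits(1, "Gb") / 8)   # = 125,000,000 bytes (125 MB)
```

Note that binary prefixes (kibi-, mebi-, gibi-, abbreviated Ki, Mi, Gi) use steps of 1,024 instead and are a separate unit system.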
Software-defined storage (SDS) is software that manages data storage resources and functionality and has no dependencies on the underlying physical storage hardware. It enables cost savings over traditional storage area network (SAN) and network-attached storage (NAS) systems that tightly couple software and hardware. Unlike monolithic SAN and NAS systems, software-defined storage products enable users to upgrade the software separately from the hardware. Common characteristics of SDS products include the ability to aggregate storage resources, scale out the system across a server cluster, manage the shared storage pool and storage services through a single administrative interface, and set policies to control storage features and functionality.
Factors contributing to the rise of SDS products include the explosive growth of unstructured data, creating a greater need for a scale-out storage architecture; the availability of high-performance server hardware with multicore processors; the general acceptance of virtualization in servers, desktops, applications and networking; and the popularity of cloud technologies.
A Storage Area Network (SAN) is a specialized, high-speed network that provides block-level network access to storage. SANs are typically composed of hosts, switches, storage elements, and storage devices that are interconnected using a variety of technologies, topologies, and protocols. SANs may also span multiple sites.
A SAN presents storage devices to a host such that the storage appears to be locally attached. This simplified presentation of storage to a host is accomplished through the use of different types of virtualization.
SANs are often used to consolidate storage resources, improve application availability, and enhance storage performance and utilization.
SANs are commonly based on Fibre Channel (FC) technology, which utilizes the Fibre Channel Protocol (FCP) for open systems and proprietary variants for mainframes. In addition, the use of Fibre Channel over Ethernet (FCoE) makes it possible to move FC traffic across existing high-speed Ethernet infrastructures and converge storage and IP protocols onto a single cable. Other technologies, such as Internet Small Computer System Interface (iSCSI), commonly used in small and medium-sized organizations as a less expensive alternative to FC, and InfiniBand, commonly used in high-performance computing environments, can also be used. In addition, it is possible to use gateways to move data between different SAN technologies.
Network-attached storage (NAS) is a type of dedicated file storage device that provides local area network (LAN) nodes with file-based shared storage through a standard Ethernet connection.
NAS devices, which typically do not have a keyboard or display, are configured and managed with a browser-based utility program. Each NAS resides on the LAN as an independent network node and has its own IP address.
An important benefit of NAS is its ability to provide multiple clients on the network with access to the same files. Prior to NAS, enterprises typically had hundreds or even thousands of discrete file servers that had to be separately configured and maintained. Today, when more storage capacity is required, NAS appliances can simply be outfitted with larger disks or clustered together to provide both vertical scalability and horizontal scalability. Many NAS vendors partner with cloud storage providers to provide customers with an extra layer of redundancy for backing up files.
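Because NAS exposes file-based storage, a mounted share behaves like any local directory to each client. The sketch below uses a temporary directory as a stand-in for the mount point; in practice it might be something like /mnt/nas, mounted over NFS or SMB from the NAS device's IP address:

```python
# Sketch: once a NAS share is mounted (e.g. via NFS or SMB), clients access
# it through ordinary file APIs. A temporary directory stands in for the
# mount point here so the example is self-contained.

import tempfile
from pathlib import Path

share = Path(tempfile.mkdtemp())  # stand-in for a mounted NAS share

# One client writes a file; another client reads the same file by path.
(share / "report.txt").write_text("Q3 numbers")
print((share / "report.txt").read_text())        # Q3 numbers

# All clients see the same shared namespace.
print(sorted(p.name for p in share.iterdir()))   # ['report.txt']
```

This is the benefit described above: many clients share one set of files through standard filesystem calls, with no NAS-specific client code required.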
In the enterprise, a NAS array can be used as a backup target for archiving and disaster recovery. If a NAS device has a server mode, it can also function as an email, multimedia, database or print server for a business. Some higher-end NAS products can hold enough disks to support RAID, a storage technology that turns multiple hard disks into one logical unit in order to provide better performance times, high availability and redundancy.
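The redundancy idea behind parity-based RAID levels such as RAID 5 can be shown in a few lines: the parity block is the bitwise XOR of the data blocks, so the contents of any single failed disk can be rebuilt from the survivors. This is a conceptual sketch, not how a real RAID controller is implemented:

```python
# Sketch of parity RAID redundancy: parity = XOR of the data blocks, so any
# one lost block can be reconstructed by XOR-ing the remaining blocks.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk1 = b"\x0f\x0f"
disk2 = b"\xf0\x01"
parity = xor_blocks(disk1, disk2)  # stored on a third disk

# Disk 2 fails: rebuild its contents from disk 1 and the parity disk.
rebuilt = xor_blocks(disk1, parity)
print(rebuilt == disk2)  # True
```

Striping the data and parity across the disks additionally improves read performance, which is why RAID provides both availability and speed.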
NAS product categories
NAS devices are grouped in three broad categories based on the number of drives, drive support, drive capacity and scalability.
High-end or enterprise NAS: The high end of the market is driven by businesses that need to store huge amounts of files, including virtual machine (VM) images. High-end NAS provides rapid access and NAS clustering capabilities.
Midmarket NAS: This end of the market can accommodate businesses that require several hundred terabytes of data. Midmarket NAS devices cannot be clustered, however, which can create file-system silos when multiple NAS devices are required.
Low-end or desktop NAS: The low end of the market is aimed at small businesses that require local shared storage. Increasingly, this market is shifting toward a cloud NAS model.
Direct-attached storage (DAS) is computer storage that is connected to one computer and not accessible to other computers. For an individual computer user, the hard drive is the usual form of direct-attached storage.
In the enterprise, individual disk drives in a server are called direct-attached storage, as are groups of drives that are external to a server but are directly attached through SCSI, SATA and SAS interfaces. DAS can provide end users with better performance than networked storage can because the server does not have to traverse the network in order to read and write data. That is why enterprise organizations often turn to DAS for certain types of applications that require high performance. Microsoft, for example, recommends that Exchange installations use DAS.
In the past, direct-attached storage was often criticized as an inefficient way to manage enterprise storage because DAS storage can't be shared and it does not facilitate failover should the server crash. As virtualization has become mainstream, however, the advantages that DAS offers are once again gaining popularity.
Networking hardware, also known as network equipment or computer networking devices, comprises the physical devices required for communication and interaction between devices on a computer network. Specifically, such devices mediate data in a computer network. Devices that are the final receivers of data, or that generate it, are called hosts or data terminal equipment.
Networking devices may include gateways, routers, network bridges, modems, wireless access points, networking cables, line drivers, switches, hubs, and repeaters; and may also include hybrid network devices such as multilayer switches, protocol converters, bridge routers, proxy servers, firewalls, network address translators, multiplexers, network interface controllers, wireless network interface controllers, ISDN terminal adapters and other related hardware.
The most common kind of networking hardware today is a copper-based Ethernet adapter which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices.
Other networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) as well as devices which assure content delivery.
In packet-switched networks such as the internet, a router is a device or, in some cases, software on a computer, that determines the best way for a packet to be forwarded to its destination.
A router connects networks. Based on its current understanding of the state of the network it is connected to, a router acts as a dispatcher as it decides which way to send each information packet. A router is located at any gateway (where one network meets another), including each point-of-presence on the internet.
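A router's forwarding decision can be sketched as a longest-prefix match against its routing table: among all table entries that contain the destination address, the most specific (longest) prefix wins. The routes and interface names below are illustrative only:

```python
# Sketch of a router's forwarding decision: longest-prefix match of the
# packet's destination address against the routing table.

import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]

print(next_hop("10.1.2.3"))  # eth2 (the /16 is more specific than the /8)
print(next_hop("10.9.9.9"))  # eth1
print(next_hop("8.8.8.8"))   # eth0 (falls through to the default route)
```

Real routers build this table dynamically from routing protocols such as OSPF or BGP, which is the "current understanding of the state of the network" mentioned above.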
On an Ethernet local area network (LAN), a switch determines, from the physical device (Media Access Control, or MAC) address in each incoming message frame, which output port to forward it out of. In a wide area packet-switched network such as the Internet, a switch determines from the IP address in each packet which output port to use for the next part of its trip to the intended destination.
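The LAN behaviour above rests on MAC learning: the switch records which port each source MAC address was seen on, forwards frames for known destinations out of that one port, and floods frames for unknown destinations out of every other port. A minimal sketch:

```python
# Sketch of Ethernet switch MAC learning: record the port each source MAC
# arrives on; forward known destinations to one port, flood unknown ones.

mac_table: dict[str, int] = {}

def handle_frame(src_mac: str, dst_mac: str, in_port: int,
                 n_ports: int = 4) -> list[int]:
    mac_table[src_mac] = in_port                        # learn the source
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]                     # forward out one port
    return [p for p in range(n_ports) if p != in_port]  # flood

print(handle_frame("aa:aa", "bb:bb", in_port=0))  # unknown dst -> [1, 2, 3]
print(handle_frame("bb:bb", "aa:aa", in_port=2))  # learned     -> [0]
print(handle_frame("aa:aa", "bb:bb", in_port=0))  # learned     -> [2]
```

After a few frames the table is populated and traffic flows port-to-port rather than being flooded, which is what makes a switch more efficient than a hub.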
A network interface controller (NIC, also known as a network interface card, network adapter, LAN adapter or physical network interface, and by similar terms) is a computer hardware component that connects a computer to a computer network.
Early network interface controllers were commonly implemented on expansion cards that plugged into a computer bus. The low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard.
Modern network interface controllers offer advanced features such as interrupt and DMA interfaces to the host processors, support for multiple receive and transmit queues, partitioning into multiple logical interfaces, and on-controller network traffic processing such as the TCP offload engine.
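The operating system exposes each NIC as a named interface, which can be enumerated from the Python standard library. On Linux this typically lists at least the loopback interface (lo) alongside any physical or virtual adapters; exact names vary by platform:

```python
# Sketch: enumerating the network interfaces (NICs) the operating system
# exposes. socket.if_nameindex() returns (index, name) pairs, e.g.
# (1, 'lo'), (2, 'eth0') on a typical Linux host.

import socket

interfaces = socket.if_nameindex()
for index, name in interfaces:
    print(f"interface {index}: {name}")
```

Each entry corresponds to one controller, whether it is a discrete expansion card, an adapter built into the motherboard, or a purely virtual interface.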
A workstation is a computer intended for individual use that is faster and more capable than a personal computer. It is intended for business or professional use (rather than ordinary consumer or home use). Workstations and the applications designed for them are used by small engineering companies, architects, graphic designers, and any organization, department or individual that requires a faster microprocessor, a large amount of random access memory (RAM) and special features such as high-speed graphics adapters.
A desktop computer is a personal computer that is designed to fit conveniently on top of a typical office desk. A desktop computer typically comes in several units that are connected together during installation: (1) the processor, which can be in a microtower or minitower designed to fit under the desk or in a unit that goes on top of the desk, (2) the display monitor, and (3) input devices, usually a keyboard and a mouse. Today, almost all desktop computers include a built-in modem, a CD-ROM drive, a multi-gigabyte magnetic storage drive, and sometimes a diskette drive.
A laptop computer, sometimes called a notebook computer by manufacturers, is a battery- or AC-powered personal computer generally smaller than a briefcase that can easily be transported and conveniently used in temporary spaces such as on airplanes, in libraries, temporary offices, and at meetings. A laptop typically weighs less than 5 pounds (2.3 kg) and is 3 inches (7.6 cm) or less in thickness.
A tablet is a wireless, portable personal computer with a touchscreen interface. The tablet form factor is typically smaller than a notebook computer, but larger than a smartphone.
Today, the most common type of tablet is the slate style, like Apple's iPad, Microsoft's Surface or Amazon's Kindle Fire. External keyboards are available for most slate-style tablets, and some keyboards also function as docking stations for the devices.
Other styles of tablets include convertible and hybrid models that pair the touchscreen with an attached or detachable keyboard.
Tablet PC operating systems and features
Consumers and businesses have a range of tablet devices and operating systems from which to choose. Collectively, tablets have made numerous technological advances and gained increasing popularity in enterprise BYOD environments.
A smartphone is a cellular telephone with an integrated computer and other features not originally associated with telephones, such as an operating system, Web browsing and the ability to run software applications.
Requirements for designation as a smartphone: