
Tuesday, March 21, 2017

Intel Optane SSD DC P4800X Series unveiled for data centers

Intel has unveiled a fast and responsive solid state drive for data centers. The Intel Optane SSD DC P4800X Series is designed to boost scale per server and accelerate applications.
The solid state drive provides industry-leading capabilities such as high throughput, low latency, high endurance and a high quality of service. The Optane series of solid state drives offers good performance even at low queue depths.
The drives remain responsive under heavy load, and consistently deliver fast, highly predictable service. They will allow data center operators to increase the effective capacity of existing data centers. The Intel Optane SSD DC P4800X expands the reach of cloud computing solutions, and is designed for emerging applications such as artificial intelligence, machine learning, electronic trading and medical scans.
The Intel Optane SSD DC P4800X with Intel Memory Drive Technology increases the size of memory pools, or allows a portion of the DRAM to be displaced. The drive is designed to seamlessly integrate into the memory subsystem and present itself as DRAM to the operating system.
The drives are available immediately to Intel customers in the early-ship program. More variants, in additional capacities and form factors, are expected to be available from the second half of 2017.

Wednesday, December 14, 2016

Qualcomm follows many others in pointing out vulnerabilities of electronic financial transactions in India

How an ATM attack works. Infographic from Symantec's blog.
Qualcomm has pointed out that most banking apps and mobile digital wallets in India do not use hardware-level security measures to ensure that financial transactions are not compromised. Qualcomm is approaching the makers of such apps to integrate the hardware-level security features of Qualcomm chipsets into their applications. This sandboxing approach prevents malware from affecting financial transactions.
There has been an increased focus on the security of electronic financial transactions ever since malware got into the systems of Hitachi Payment Services, which provides back-end services to ATMs and point-of-sale terminals across India. 32 lakh debit cards were compromised, including those issued by SBI, HDFC, YES, AXIS, BOB and ICICI.
Security experts and consultants have pointed out various holes in the electronic transaction systems in place in India. ATMs need to implement state-of-the-art encryption. Magnetic stripe cards need to be replaced with newer EMV chip cards, a global standard created by Europay, MasterCard and Visa; the continued use of magnetic stripe cards leaves ATM transactions vulnerable to skimming and cloning attacks. The databases of the banks themselves also have to be adequately secured.
Intel has also warned that ATMs in India are vulnerable to malicious attacks. Intel points out that countries in the Asia Pacific region are still developing and are particularly vulnerable because of the old systems and machines in use. ATMs tend to run outdated operating systems such as Windows XP, which makes them an easier target for malicious attacks. Intel has also called for securing ATMs with multiple levels of authentication and industry-standard encryption.
Humans are the weakest link in the security chain, and there is a need for banks to educate users about phishing websites, frauds and scam emails. This is particularly important for users who are only starting to use digital wallets and banking apps after the demonetisation. Critical login credentials can be compromised by someone merely glancing at the screen when an app is used in a public place, a method of low-tech hacking known as shoulder surfing.
Antivirus solutions installed by users to protect financial transactions can actually end up making online banking less secure, according to researchers from Concordia University in Montreal, Canada. Antivirus software intervenes in the regular operations of the browser and operating system, and can be used to fool the system with fake credentials. The researchers tested commonly used security services and found that they lowered the levels of security normally provided by browsers.
The hacking collective known as Legion has warned of weaknesses in the Indian banking system. The group has said that it has the capability to hack into financial systems, but has chosen not to do so. Legion has also said that there have been significant breaches in the past, but banks have not alerted their customers. There is no legal requirement to disclose data breaches to customers; the onus is on the banks to do so voluntarily.
However, it is not just amateur hackers who are a threat. There are serious cybercriminals, and even the largest financial networks are susceptible to attacks. SWIFT has confirmed that hackers trying to get into its system have succeeded multiple times, and are continuously using newer and more sophisticated techniques. SWIFT has warned of consistent efforts by the group that pulled off the Bangladesh Bank heist to compromise its systems.
The laws in place are outdated, and need to be tweaked to take new developments into account. Digital wallets in India currently have no prescribed security standards, and are free to implement their own measures. There are no laws to hold digital wallets responsible if something goes wrong during a financial transaction.
Qualcomm senior director of product management Sy Choudhury lauded the efforts around Aadhaar and the India Stack, a collection of APIs ready to be integrated with applications and services. One of the aims of the India Stack is to ensure smooth and secure financial transactions. The Unified Payment Interface (UPI), Aadhaar-linked biometric identification, the Unique Identification Authority of India (UIDAI), e-KYC and the Aadhaar Enabled Payments System (AEPS) are all elements of the stack that can improve the security of transactions.

Wednesday, November 30, 2016

Intel creates Automated Driving Group following partnership with Delphi and Mobileye

Image Credits: REUTERS
Intel has created a new organisation dedicated to autonomous driving technologies, called Automated Driving Group (ADG). The move follows close on the heels of an announcement that Intel chips would be used in self-driving cars being made by Delphi and Mobileye. ADG will be creating the next generation of driver assist systems and other solutions needed for autonomous vehicles.
Doug Davis, Senior Vice President and General Manager of the ADG group
Intel has appointed Doug Davis as Senior Vice President and General Manager of ADG. Davis has worked at Intel for over 30 years, and has been at the forefront of some of the most disruptive technological innovations at the company. He started his career at Intel in the Military Division, and has since worked in the Embedded Microcomputer Division, the Network Processor Division, the Infrastructure Processor Division and the Embedded and Communications Group. Most recently, Davis was General Manager of the Internet of Things group; he has postponed his retirement to take up the new position.
Davis will be assisted by Kathy Winter, previously Vice President of Software and Services for Automated Driving at Delphi and now Vice President and General Manager of the Automated Solutions Division (ASD) at Intel, where she will work on solutions for automated driving. Winter was recognised for the first cross-country drive by an autonomous vehicle in 2015. Before Delphi, she held senior positions at Motorola Mobility.

Friday, November 4, 2016

Intel Security outlines strategy for protecting new digital economy

Intel Security has announced a new strategy built around an enhanced unified defense architecture, designed from the ground up to give organisations more power to protect today's new digital economy, which rests on three main factors: trust, time and money.
Intel says the world's economy is no longer a purely physical one, and that the growing number of interconnected networks and systems puts everyone, including organisations, at risk from cybercriminals, forcing users and businesses onto the defensive.
“Cybercriminals are forcing cybersecurity companies to re-draft the rules of engagement for defending the civilised world; to effectively counteract them, we have to abandon old security playbooks to become more unpredictable and collaborative and make cyber defense a priority,” said Chris Young, senior vice president and general manager of Intel Security Group. “Our strategic charter is simple, yet disruptive: integrate, automate and orchestrate the threat defense lifecycle to drive better security outcomes – ultimately reducing more risk, faster and with fewer resources.”
Intel Security's new unified architecture is enabled by four key integrated systems: Dynamic Endpoint, Pervasive Data Protection, Data Center and Cloud Defense, and Intelligent Security Operations. All of these systems work together in a deeply integrated manner to multiply the effectiveness of the system as a whole.
Along with the above, Intel Security also announced its intent to open the McAfee Data Exchange Layer (DXL) to give the industry a concrete way to disrupt the cyberattacker's advantage.

Wednesday, October 26, 2016

Bolstering the last mile with Multipath TCP



Authors:

Robert Skog, Dinand Roeland, Jaume Rius i Riu, Uwe Horn, Michael Eriksson

The last mile is the part of the telecommunications network that physically reaches user premises, either by wireless technology (cellular networks) or wireline technology such as cable, fiber or digital subscriber line (DSL). The achievable data rates for each of these access technologies vary, but in many cases the bandwidth depends on the distance between the access termination point in the service provider network and the device in the user premises. This means that no matter how fast the service is up to the access termination point, the users who are farthest away from it will experience significantly slower service than the ones who are closer.

For example, although the most recently standardized DSL technologies allow bitrates of up to 1Gbps, most subscribers today are still getting less than 20Mbps. The reason for this is the dependency between the achievable bitrate and the length of the copper line connecting a household to the DSL access multiplexer (DSLAM). As Figure 1 shows, if the distance between the user premises and the DSLAM exceeds 2km, DSL speed falls quickly below 20Mbps. The obvious solution is to reduce the length of the last mile. If the copper line distance can be reduced to less than 250m, new technologies and standards such as vectoring and G.fast will allow bitrates of about 1Gbps. However, reducing the copper line distance is costly because it requires the deployment of more street cabinets connected by fiber lines to the backbone network. To get around this, some fixed broadband service providers have started to launch offerings that combine DSL with LTE as a cheaper way to boost the bitrate for DSL customers than deploying more fiber-connected DSLAM street cabinets.

Figure 1: Speed versus copper line length between user premises and the DSLAM for the most widely deployed DSL technologies

Similarly, LTE/Wi-Fi aggregation is useful as a booster for mobile phones. Some operators have started deploying solutions that combine Wi-Fi and LTE accesses in areas such as shopping malls and big event venues as a means to increase user capacity while at the same time offloading their cellular network traffic to the fixed networks when possible.
Technologies for access aggregation

Many standardized aggregation technologies only support use cases in which links using the same access type are aggregated. This is known as bonding, and examples include the bonding of several Ethernet links, or of two DSL access links. Notable exceptions are IP Flow Mobility and multiple-access PDN connectivity – both defined by 3GPP – which are able to support aggregation of multiple access types[1]. However, these two technologies have gained little traction because their introduction on mobile devices would require a significant implementation effort, and even the apps running on them would require modifications.

Multipath TCP as specified by the IETF[2] can be deployed in existing networks more easily than other alternatives because it is an evolution of TCP[3] – the most widely used protocol in the internet today. This guarantees interoperability between equipment from different vendors. Like TCP, Multipath TCP works on top of IP. Since IP is the foundation of all internet protocols, Multipath TCP can be used across all kinds of access networks, providing a rich toolkit that supports access aggregation for use cases such as bandwidth aggregation, reliability and seamless connectivity. In addition, there is an open source reference implementation for Multipath TCP that is continuously developed and improved by a large community of developers[4].

Figure 2 shows two access aggregation scenarios enabled by Multipath TCP. The first scenario shows DSL/LTE aggregation, where an existing DSL connection is combined with LTE. If the DSL link provides 12Mbps and the LTE link provides 8Mbps, the aggregated bandwidth that can be obtained via Multipath TCP is roughly 20Mbps.

Figure 2: Examples of access aggregation enabled by Multipath TCP

The second scenario shows LTE/Wi-Fi aggregation, which functions according to the same principle. Together with a mobile device manufacturer, Ericsson has performed successful field trials in public LTE and Wi-Fi networks using commercially available mobile devices. Only the firmware was modified to support Multipath TCP.

Although the benefits of Multipath TCP are often presented in the context of two different access networks, there is no limit in Multipath TCP that would prevent the use of three, four or more access networks. The access networks could even be operated by different service providers, which is an additional benefit for use cases aiming for improved resiliency.
Aggregating bandwidth

Bandwidth aggregation refers to the ability of Multipath TCP to combine the bandwidth of several links into one logical connection. Figure 3 shows an example of how Multipath TCP adds together the bandwidth of DSL and LTE. This is equally valid for the LTE + Wi-Fi scenario depicted in the bottom part of Figure 2.

Figure 3: DSL and LTE bandwidth aggregation with Multipath TCP

The bandwidth aggregation features of Multipath TCP apply to both downlink and uplink directions. As a result, Multipath TCP also helps to improve uplink speeds, which are only a fraction of the downlink speed in existing (asymmetric) DSL consumer services. For instance, the uplink speed over a 6Mbps asymmetric DSL connection is usually below 1Mbps. Aggregating DSL and LTE makes it possible to boost the uplink speed to 10Mbps and more.

Examples of services that would benefit from the Multipath TCP bandwidth aggregation are:
A user watching HDTV (high definition TV) over a DSL access connection that is not capable of providing enough bandwidth – Multipath TCP can be used to schedule surplus traffic over LTE (particularly useful for the downlink).
A user uploading documents or photos to a server – when the DSL uplink capacity is exceeded, Multipath TCP can add LTE capacity for quicker upload.
Improving reliability

In the context of access aggregation, reliability refers to the ability to maintain data exchange within a session, even if one or several access links become unavailable. Figure 4 compares the behavior of a traditional WAN backup solution with that of a solution based on Multipath TCP. Traditional solutions cannot react quickly to the disappearance and reappearance of access links. Whenever a link disappears, sessions break and need to be reestablished, which can lead to data loss and the need for human intervention.

Figure 4: Improved connection resiliency with Multipath TCP

Multipath TCP is able to react more quickly to access links disappearing and reappearing. And as long as at least one access link is up and running, a Multipath TCP enabled session will continue without interruption – albeit at a lower bitrate. Likewise, if an access link reappears, the bitrate goes up. The connection always runs at an optimal speed in relation to the availability of the links involved.
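The resiliency behavior can be summarized in a few lines: the connection's effective rate is simply the sum of whichever links are currently up. A minimal sketch (rates in Mbps; the numbers and function name are illustrative, not from any real implementation):

```python
def aggregate_rate(links):
    # Sum the rates of whichever links are currently up; a Multipath
    # TCP connection keeps running at this aggregate as links come
    # and go (illustrative sketch of the behavior described above).
    return sum(rate for rate, up in links if up)

links = [(12, True), (8, True)]      # DSL and LTE both up
assert aggregate_rate(links) == 20   # full aggregated bandwidth
links = [(12, False), (8, True)]     # DSL drops out
assert aggregate_rate(links) == 8    # session survives at a lower rate
```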
Achieving seamless connectivity

The concept of seamless connectivity is related to reliability, referring more specifically to the ability of Multipath TCP to switch from one access to another without having any impact on the application. A typical use case would be a session started over Wi-Fi. If the mobile device leaves Wi-Fi coverage and enters mobile broadband coverage, the session will break and need to be reestablished. This can be quite annoying and time consuming for the user, especially if two-factor authentication is involved. With Multipath TCP, the session does not get interrupted due to the change of access.

Changing from one access to another can also be triggered by service provider policies. For example, a service provider could have a policy to use LTE by default, but move some traffic to Wi-Fi when there is good coverage and available capacity. Or, alternatively, the service provider could set a policy where Wi-Fi is used by default and LTE is used to provide wide-area coverage. In all cases, the use of Multipath TCP prevents sessions from being interrupted if and when access systems change.
How Multipath TCP works

TCP[3] is one of the main protocols in the IP suite, providing a reliable means of communication between two endpoints. Once a TCP connection has been set up, both endpoints can send a data stream to each other. TCP is designed to cope with data that is damaged, lost, duplicated or delivered out of order. Furthermore, it provides a means to perform flow control. Upon receiving data, the receiver sends an acknowledgment (ACK) back to the sender. Such an ACK contains a “window,” which indicates the maximum number of bytes the sender is allowed to transmit before receiving further permission. This way, the receiver controls the amount of data transferred by the sender. Finally, the receipt or non-receipt of ACKs guides the TCP Congestion Control Algorithm (CCA) to determine the pace at which data may be sent.
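The flow-control mechanism described above can be illustrated with a toy model: the sender may never have more unacknowledged bytes in flight than the receiver's advertised window, and each ACK reopens the window. This is an illustrative sketch, not a real TCP implementation:

```python
def send_with_window(data: bytes, recv_window: int, chunk: int = 100):
    """Split data into (offset, length) segments, never exceeding
    recv_window bytes in flight; each 'round' models the receiver
    ACKing all outstanding data and re-advertising its window."""
    sent = 0
    rounds = []
    while sent < len(data):
        in_flight = 0
        burst = []
        # Send until the advertised window is exhausted...
        while in_flight < recv_window and sent < len(data):
            seg = min(chunk, recv_window - in_flight, len(data) - sent)
            burst.append((sent, seg))
            in_flight += seg
            sent += seg
        # ...then wait for an ACK that reopens the window.
        rounds.append(burst)
    return rounds

# With a 300-byte window, 1000 bytes of data need 4 ACK rounds.
rounds = send_with_window(b"x" * 1000, recv_window=300)
```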

Today, many endpoints have multiple data communication interfaces and therefore multiple IP addresses. For example, a laptop is often equipped with both a wired and a wireless interface, and a smartphone often has the capability to use multiple wireless communication technologies. Using regular TCP, these devices are capable of establishing multiple simultaneous TCP connections, with each connection tied to one specific IP interface. In other words, each TCP connection is bound to a single path defined by the IP addresses of the connection’s endpoints. Note, however, that a path is defined here in terms of endpoint identifiers; it is not the same as the route that individual packets take on their way from one endpoint to the other.

Multipath TCP [2] is a set of extensions to standard TCP that allows connections to use multiple paths simultaneously. Multiple regular TCP connections, also known as subflows, are aggregated into a single Multipath TCP connection. Figure 5 compares the protocol stack of regular TCP with that of Multipath TCP.

Figure 5: Protocol stack for TCP and Multipath TCP

In regular TCP, an application initiates communication by opening a connection via an application programming interface (API) provided by the operating system. The TCP layer communicates in its turn with the IP layer. In Multipath TCP, the TCP layer has been extended. Upwards, the Multipath TCP layer exposes an interface that is perceived as regular TCP by the application. Downwards, the Multipath TCP layer may set up multiple regular TCP connections. These may be bound to different IP layers. In Figure 5, the host is equipped with multiple data communication interfaces. Each one is associated with its own IP address. The Multipath TCP layer aggregates the multiple TCP connections into a single Multipath TCP connection. The application does not need to be aware of which protocol stack is used.
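This transparency is visible in practice: on recent Linux kernels (5.6 and later), an application can opt into Multipath TCP explicitly while keeping the familiar socket API, and falls back to plain TCP where support is missing. A hedged sketch (262 is the protocol number documented for Linux; on other platforms the fallback path is simply taken):

```python
import socket

# IPPROTO_MPTCP is exposed by Python 3.10+ on Linux; fall back to the
# raw protocol number (262) documented for Linux 5.6+ kernels.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    """Try to create an MPTCP socket; fall back to plain TCP if the
    kernel lacks MPTCP support. The application-facing API is the
    same either way, which is the point of the design."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP)
    except OSError:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()   # usable exactly like a regular TCP socket
s.close()
```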

Figure 6 shows an example of how a Multipath TCP connection can be established. It starts with the setup of a first subflow (steps 2-4). These steps consist of a three-way handshake, similar to the process in regular TCP. The only difference for Multipath TCP is that an MP_CAPABLE option is used in the TCP header. With this option, the device indicates to its peer that it is Multipath TCP capable and wants to use it (step 2). If the peer is also able to use Multipath TCP, it replies with a similar capability indication (step 3). As part of the three-way handshake, the endpoints also exchange security keys. After setting up the first subflow, both endpoints can exchange data over the connection (steps 6–7).

Figure 6: Establishment of a Multipath TCP connection

Once a Multipath TCP connection has been established, each endpoint may initiate the setup of an additional subflow. In the example shown in Figure 6, the device has two network interfaces. Each interface is associated with its own IP address. Here, the device takes the initiative to establish a second subflow via its second interface. Again, a three-way handshake is used to achieve this. But this time the option MP_JOIN is used to indicate that this is a new subflow that is to be joined to an existing Multipath TCP connection. A token (step 9), derived from the earlier received key (step 3), is used to correctly bind the two subflows. Additional authentication information is also exchanged to ensure the authenticity of both endpoints.
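The token mentioned in step 9 is defined by RFC 6824 as the most significant 32 bits of the SHA-1 hash of the key exchanged during the MP_CAPABLE handshake. A sketch of the derivation, and of the lookup a peer performs when an MP_JOIN arrives (the connection table here is purely illustrative):

```python
import hashlib

def mptcp_token(key: bytes) -> int:
    """Per RFC 6824, the token identifying an MPTCP connection is the
    most significant 32 bits of the SHA-1 hash of the 64-bit key
    exchanged in the MP_CAPABLE handshake."""
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big")

# The peer receiving an MP_JOIN looks the token up to find the
# existing connection the new subflow should be attached to.
connections = {}                            # token -> state (sketch)
key = bytes.fromhex("0102030405060708")     # 64-bit key from MP_CAPABLE
connections[mptcp_token(key)] = "mptcp-connection-state"

def handle_mp_join(token: int):
    return connections.get(token)           # None: unknown connection
```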

Once the new subflow has been established, both endpoints can use it to send and receive data. In our example, the device sends data to its peer (step 14). Note that the device needs to take an active decision regarding which subflow to use (step 13). How this decision is made is not defined in the standard, which gives the designer the freedom to implement the scheduling policy that is most appropriate for each case.

Subflows may come and go for various reasons, such as connectivity problems. To ensure reliable, in-order delivery to the application, Multipath TCP uses a data sequence number that is carried in a Data Sequence Signal option (steps 6-7 and 14-15). Aside from ensuring in-order delivery, this number can be used in combination with the sequence numbers used by regular TCP at subflow level to execute retransmissions on different subflows, if needed. Multipath TCP can also synchronize congestion control over subflows in order to avoid unfairness to single-path users[5].
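In-order delivery by data sequence number can be sketched as a small reassembly buffer: segments may arrive on any subflow and in any order, and the connection-level sequence number restores the byte stream. A simplified illustration (byte-offset sequence numbers, no retransmission logic):

```python
import heapq

class Reassembler:
    """Minimal sketch of in-order delivery by data sequence number."""
    def __init__(self):
        self.next_seq = 0
        self.pending = []          # min-heap of (seq, data)

    def receive(self, seq: int, data: bytes) -> bytes:
        """Buffer a segment; return whatever is now deliverable in order."""
        heapq.heappush(self.pending, (seq, data))
        out = b""
        while self.pending and self.pending[0][0] == self.next_seq:
            seq, data = heapq.heappop(self.pending)
            out += data
            self.next_seq += len(data)
        return out

r = Reassembler()
assert r.receive(5, b"world") == b""             # arrived early: buffered
assert r.receive(0, b"hello") == b"helloworld"   # gap filled: both delivered
```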

An additional benefit of Multipath TCP is that it can be introduced incrementally. In particular, if the receiver of the first subflow's TCP SYN does not support Multipath TCP, it will simply discard the capability option. It will reply with a TCP SYN ACK, but without adding the MP_CAPABLE option, and the connection will be made with standard TCP.

User space

In computer design, a distinction is made between kernel space and user space. Kernel space is where the operating system code runs – hardware device drivers, memory management and protocol stacks, for example. User space is where ordinary programs run. In designing our Multipath TCP solution, we chose to place a protocol stack (MPTCP) in user space rather than in kernel space. This results in faster packet processing, because packets don’t need to travel from kernel space to user space. Instead, they go directly from the hardware interface to user space.
The proxy-based approach to Multipath TCP access aggregation

Proxies make it possible to achieve the benefits of Multipath TCP for access aggregation without requiring Multipath TCP support in all end devices and internet servers. An additional benefit of proxies is that they give the service provider control over the scheduling of the traffic. In this way, service providers can ensure that the available access alternatives are used in the most efficient and cost-effective way. The use of proxies has already been recognized by the industry, and work has been done and published by the Broadband Forum defining the architecture[6]. Ericsson is contributing actively to this work.

Figure 7 provides a high-level overview of the proxy-based approach to Multipath TCP access aggregation. There are two proxies involved: a network proxy and a customer premises equipment (CPE) proxy. The network proxy is located in the service provider’s network and converts TCP sessions from internet servers into Multipath TCP sessions that operate across multiple access networks. Similarly, the CPE proxy converts a Multipath TCP session with the network proxy back into a TCP session.

Figure 7: Proxy-based approach for Multipath TCP access aggregation

End devices with built-in Multipath TCP support could also connect directly to the network proxy. There are already some smartphones on the market with built-in Multipath TCP support that can be used to aggregate LTE and Wi-Fi. Ericsson has run tests that prove the feasibility of this setup in public LTE and Wi-Fi networks.

The proxies can be used to enhance standard Multipath TCP via additional traffic-steering capabilities that are optimized for the specific application scenario. For instance, a service provider might want to ensure that the DSL pipe is filled first before using the scarcer LTE bandwidth. This traffic-steering approach is often referred to as a cheapest-link-first policy. Service providers might also want to define policies to prevent or allow the use of heterogeneous access for specific services, or to force selected services to use only one of the available access links. All of this is possible with Multipath TCP, as the IETF standard does not prescribe a specific traffic-steering method.
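The cheapest-link-first policy described above can be sketched in a few lines: fill the cheapest link to capacity before spilling surplus traffic onto the next. The field names and numbers are illustrative; as noted, the IETF standard deliberately leaves traffic steering to the implementation:

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    capacity_mbps: float
    load_mbps: float = 0.0
    cost_rank: int = 0     # lower = cheaper (e.g. DSL before LTE)

def cheapest_link_first(subflows, demand_mbps):
    """Fill the cheapest link first; spill the surplus onto the next.
    Illustrative policy only -- not prescribed by the MPTCP standard."""
    plan = {}
    remaining = demand_mbps
    for sf in sorted(subflows, key=lambda s: s.cost_rank):
        share = min(remaining, sf.capacity_mbps - sf.load_mbps)
        if share > 0:
            plan[sf.name] = share
            remaining -= share
        if remaining <= 0:
            break
    return plan

links = [Subflow("dsl", 12, cost_rank=0), Subflow("lte", 50, cost_rank=1)]
plan = cheapest_link_first(links, demand_mbps=20)
# DSL is filled first (12 Mbps); LTE carries the 8 Mbps surplus.
```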

In an implementation, the optional CPE proxy will be integrated in a CPE such as a home or office router. This setup can be used in a residential or enterprise setting, and when it is in place, all devices connecting to the router will receive a faster and more reliable internet connection. Traffic steering can also be applied at the CPE proxy level to control the traffic in the uplink direction.

Ericsson is partnering with CPE vendors and chipset manufacturers such as Intel to ensure efficient implementation of the Multipath TCP CPE proxy. We also offer a reference design and a test lab environment for CPE vendors.
Carrier-grade Multipath TCP proxy implementation

One important requirement for a Multipath TCP proxy in the service provider network is the ability to support a high-performance, carrier-grade IP solution for traffic aggregation. Figure 8 illustrates how Ericsson’s solution can be used as a Multipath TCP network proxy, which can be deployed in either a virtualized or non-virtualized environment.

Figure 8: Multipath TCP network proxy

All components – including Multipath TCP functionality – are implemented in user space[7] to meet the capacity requirements. The TCP traffic can be accessed directly from hardware using a Data Plane Development Kit (DPDK)[8]. The packet distribution function is responsible for sending traffic to the Multipath TCP protocol stack, located in the user space on one or several central processing unit (CPU) cores.

The Ericsson solution implements Multipath TCP functionality as specified by the IETF [2], combined with a specifically designed TCP CCA called TCP RNA (Radio Network Aware). TCP RNA is designed to utilize the mobile RAN in an optimal way, and computes the correct congestion window from measurements of the arrival rate of TCP ACKs in conjunction with reactions to lost TCP segments. The benefits of TCP RNA are:
maximum utilization of available bandwidth for both uplink and downlink
reduced retransmissions using traffic shaping
controllable latency
avoiding bufferbloat.
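As a generic illustration of ACK-clocked rate estimation (a textbook technique, not Ericsson's proprietary TCP RNA algorithm): the delivery rate observed from arriving ACKs, multiplied by the round-trip time, yields a congestion window sized to the bandwidth-delay product, keeping the pipe full without overfilling buffers:

```python
def target_cwnd(acked_bytes: int, ack_interval_s: float,
                rtt_s: float) -> int:
    """Estimate a congestion window from ACK arrivals: the delivery
    rate seen over an ACK interval, times the RTT, approximates the
    bandwidth-delay product. Textbook sketch only -- not TCP RNA."""
    delivery_rate = acked_bytes / ack_interval_s   # bytes per second
    return int(delivery_rate * rtt_s)              # ~BDP in bytes

# 1 MB ACKed over 0.5 s on a 40 ms RTT path -> an 80 kB window
cwnd = target_cwnd(1_000_000, 0.5, 0.040)
```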

This solution is highly configurable and can be tailored to support multiple Multipath TCP use cases per access network. The traffic-steering settings are policy driven. One configuration example is to send Multipath TCP traffic on one preferred subflow, such as the DSL link. When the DSL link has reached its limit, any surplus Multipath TCP traffic will be sent on another subflow – most commonly the LTE link.

Another configuration example aims to optimize radio usage on a system-wide level. If Multipath TCP traffic is sharing radio spectrum with other non-Multipath TCP traffic – from LTE-only mobile phones, for example – it might be preferable to avoid excessive use of the LTE link from Multipath TCP traffic. This can be achieved by configuring the TCP RNA for the LTE link to behave like background delivery. The result is that Multipath TCP traffic will back off when TCP RNA detects that the cell is congested, in favor of LTE-only traffic.

At times, it might be desirable to configure Multipath TCP for maximum throughput – when combining LTE with Wi-Fi access for fast file download, for example. In such a scenario, the solution can be configured to use round-trip-time-based (RTT-based) traffic steering. Such traffic steering is achieved by sending data over the subflow with the lowest RTT. If that link reaches its capacity limit and there is more data to send, the rest of the data is sent over the other subflow. If one subflow can handle all the data, only the link with the lowest RTT will be used.
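The RTT-based steering just described can be sketched as: among the subflows with spare capacity, pick the one with the lowest measured RTT. Field names are illustrative, not taken from any real implementation:

```python
def pick_subflow(subflows):
    """RTT-based steering sketch: prefer the subflow with the lowest
    measured RTT among those that still have spare capacity."""
    candidates = [s for s in subflows if s["load"] < s["capacity"]]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s["rtt_ms"])["name"]

links = [
    {"name": "wifi", "rtt_ms": 8.0,  "capacity": 100, "load": 100},  # full
    {"name": "lte",  "rtt_ms": 35.0, "capacity": 50,  "load": 10},
]
# Wi-Fi has the lower RTT but no spare capacity, so LTE is chosen.
```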
Conclusions

Access aggregation is a viable option for service providers to boost bandwidth across the last mile in areas where it is too costly to increase the capacity of legacy access. Typical access aggregation scenarios are the combination of DSL with LTE or the combination of LTE with Wi-Fi. Multipath TCP, as specified by the IETF, is ideal for access aggregation in the last mile, as it is able to boost bandwidth significantly, while simultaneously increasing reliability and ensuring seamless connectivity.

Multipath TCP comes as a set of extensions to standard TCP. It leverages all of the benefits of TCP such as fairness, flow control and reliability, as well as allowing the use of multiple paths through a network simultaneously. Multipath TCP proxies allow service providers to use Multipath TCP for access aggregation without the need for end devices and internet servers to be aware of it.

Ericsson has created a Multipath TCP proxy that is tailored to the specific needs of service providers. It is carrier-grade, optimized for high traffic throughput and allows service providers to implement traffic-steering policies for the use of available access networks in the most cost-effective and efficient way.

Terms and abbreviations
ACK – ACKnowledgment
CCA – Congestion Control Algorithm
CPE – Customer Premises Equipment
CPU – Central Processing Unit
DPDK – Data Plane Development Kit
DSL – Digital Subscriber Line
DSLAM – DSL Access Multiplexer
IETF – Internet Engineering Task Force
MFDN – Media First Delivery Node
RNA – Radio Network Aware
RTT – Round-Trip Time
TCP RNA – TCP Radio Network Aware
VDSL – Very high-speed DSL

Wednesday, October 19, 2016

Intel beats Wall Street earnings expectation but current quarter forecast disappointing

Intel beats Wall Street earnings expectation but current quarter forecast disappointing

Intel Corp reported better-than-expected quarterly earnings and revenue, boosted by improving PC demand and growth in its data centre and cloud businesses, but its revenue forecast for the current quarter disappointed Wall Street. The world’s largest chipmaker’s shares were down 5.3 percent at $35.75 in after-hours trading on Tuesday.
Intel said it expects fourth-quarter revenue of $15.7 billion, plus or minus $500 million. Analysts on average were expecting $15.86 billion, according to Thomson Reuters I/B/E/S. “This is below the average seasonal increase for the fourth quarter as we expect the worldwide PC supply chain to reduce their inventory,” Executive Vice President Stacy Smith said on a conference call with analysts.
Last month, Intel raised its third-quarter revenue forecast for the first time in more than two years, citing improving PC demand. The company's decision to raise its third-quarter forecast moved investors' expectations to

Sunday, October 16, 2016

Consortium of tech companies including IBM and Google to take on Intel with new processor interface

Consortium of tech companies including IBM and Google to take on Intel with new processor interface

Image Credit: REUTERS
Technology giants IBM Corp, Google and seven others have joined hands to launch an open specification that can boost datacenter server performance by up to ten times, to take on Intel Corp. The new standard, called Open Coherent Accelerator Processor Interface (OpenCAPI), is an open forum to provide a high bandwidth, low latency open interface design specification.
The open interface will help corporate and cloud data centers to speed up big data, machine learning, analytics and other emerging workloads. The consortium plans to make the OpenCAPI specification available to the public before the end of the year and expects servers and related products based on the new standard in the second half of 2017, it said in a statement.
Intel, the world’s largest chipmaker, is known to protect its server technologies and has chosen to sit out of the new consortium. It has also stayed away from prominent open-standards technology groups such as CCIX and Gen-Z in the past. “As artificial intelligence, machine learning and advanced analytics become the price of doing business in today’s digital era, huge volumes of data are now the norm,” Doug Balog, general manager for IBM Power, told Reuters.
“It’s clear that today’s datacenters can no longer rely on one company alone to drive innovation,” Balog said.
Reuters

Wednesday, October 5, 2016

Apple edges out Google as the world's most valuable brand for the fourth consecutive year





By Rob Thubron on October 5, 2016, 12:45 PM




Its smartphone sales may be slipping, the iPhone 7 has been criticized for looking overly similar to last year’s model (and having no 3.5mm headphone jack), and analysts predict it will sell fewer smartwatches this year than in 2015. But despite all this, Apple has been named the most valuable brand in the world for the fourth year in a row.

Brand consultancy firm Interbrand placed Apple at number one with a brand valuation of $178.1 billion, a five percent increase from last year. Rival Google sits behind the Cupertino company with a brand valuation of $133.2 billion, eleven percent higher than in 2015.

Technology companies take up ten of the top twenty places on the list. Microsoft sits behind Coca-Cola in fourth place with a $72.7 billion brand value, up eight percent over the last 12 months. Further down the top ten is IBM; it may have dropped nineteen percent from last year, but the industry giant's $52.5 billion brand value places it at number six.

A fourteen percent increase in 2016 puts Samsung beneath IBM at number seven with a brand value of $51.8 billion, while Amazon, which saw its stock pass the $800 milestone for the first time last month, is at number eight. The online retailer's brand value increased a massive 33 percent this year to $50.3 billion.

The other tech firms in the top 20 are Intel at fourteen ($36 billion, +4 percent), Facebook at fifteen ($32.5 billion, +48 percent), Cisco at sixteen ($30.9 billion, +4 percent), and Oracle at seventeen ($26.5 billion, -3 percent).



Two companies have joined the list this year: Tesla at number 100 ($4 billion) and Dior at 89 ($4.9 billion).

Facebook and Amazon experienced the biggest growth in brand value across 2016 – 48 percent and 33 percent, respectively.

Writing about Apple’s success, Interbrand noted: “Apple shows how ecosystems drive value. Analysts have often pointed out that ‘Apple has superior products.’ While true, this opinion undersells the brilliance of Apple’s functionally-integrated model. Its software, hardware, and touchpoints are connected not just by beautiful design aesthetics, but by a level of interoperability that justifies the Apple premium and discourages defections to another platform. And the more data you share, the more personal it becomes—adding new devices is painless and the thought of switching increasingly unpromising. Apple is the Alpha of Cohesiveness in full effect.”

Tuesday, October 4, 2016

Apple is expected to refresh MacBook Pro at the end of this month

Apple is expected to refresh MacBook Pro at the end of this month

By 
Apple is expected to refresh the MacBook Pro, according to reports by PetaPixel. This refresh will be the first significant overhaul of Apple’s MacBook Pro laptop lineup in the last four years, as reported by Bloomberg. There have been numerous rumours that Apple will refresh the MacBook Pro at the end of this month, though with no concrete evidence. Earlier rumours stated that the refresh was slated for late 2016, after the Apple iPhone 7 event.
Some of the reports point to an imminent early October release while others point to an end of October release. According to MacRumours, the launch date for the updated version will be after 7 October as the company is aiming to

Friday, September 23, 2016

Intel 600p Series SSD Review

We talked about the emerging entry-level NVMe SSD category in the Patriot Hellfire review and Samsung 960 EVO Preview (with the OEM PM961). Today we go straight to ground zero with the Intel 600p that puts entry-level NVMe at our fingertips.
The Intel 600p's low price made immediate waves when online retailers first posted the new SSDs. The 600p retails for significantly less than other shipping NVMe SSDs, and at the time of writing, several glowing reviews of the entry-level NVMe SSD have fanned the flames. The 600p appears to be steadily moving toward cult-like status in some circles, but this is not a resurrection of the Celeron 300A.
There are always trade-offs with any entry-level product, and it's important to find out which corners the manufacturer cut. The Intel 600p and the Pro 6000p (its professional cousin with vPro and full disk encryption) both deliver significantly lower performance than other NVMe SSDs. Lower performance is the attribute many will point to as the reason the drive is so cheap. Sadly, performance wasn't the only sacrifice Intel made to hit the low price point. We dove a little deeper into the details, especially the endurance restrictions, and came away feeling like Intel cut the 600p in half. Before we get to that, let's look at how the 600p came to be.
Intel and Micron, who jointly produce NAND in the IMFT partnership, established a relationship with Silicon Motion, Inc., a third-party SSD controller vendor. SMI secured design wins with its low-cost, low-power 4-channel SSD controllers. The SM2260 is SMI's first high-performance 8-channel controller, and it is only the second controller designed specifically for IMFT's 1st generation 3D NAND.
Micron has a close working relationship with Marvell, and the company uses its Dean 4-channel controller for the Crucial MX300 mainstream class SSD. The two companies continue to collaborate, but SMI's low-cost controllers power many of Crucial's entry-level SSDs.
Intel's predicament is a little more complicated. For several years, we all thought of Intel as an SSD controller company, which is a perception that dates back to the company's first SSD products. What we didn't know was that LSI Corporation, a fabless semiconductor vendor, handled the hardware design (later SandForce, too) while Intel contributed the firmware. Intel and LSI mingled together publicly with SandForce products until LSI CEO Abhi Talwalkar sold the company to Avago, and Intel's controller design house went with it.
In recent years, Avago acquired several companies and radically increased the price of existing products - some have increased by as much as four times. Avago continues to design SSD controllers, but Intel transitioned to SMI for its latest SSDs.
The 600p uses the SMI SM2260 controller in tandem with Intel firmware. By no means did either company plan to make the 600p the first entry-level NVMe SSD; it was designed to be a premium product to take on Samsung's ever-expanding line of high-performance M.2 NVMe SSDs. Micron canceled the TX3 (with 3D MLC) when the combination of the SMI controller and IMFT flash failed to deliver high performance, but Intel forged ahead with its first entry-level NVMe SSD.

Warranty And Endurance

All four Intel 600p SSDs ship with a five-year warranty and carry the same 72 terabytes written (TBW) endurance rating, which means each drive can only absorb up to 72TB of writes during its lifetime. This is only the second time we've encountered an entire product series that employs a blanket TBW rating. To put this into perspective, the 128GB OCZ VX500 from a recent review also carries a 72TB TBW rating, but the 1TB VX500 offers up to 592TB TBW. 72TB of data writes may or may not sound like a lot to you, but it is a very low endurance rating compared to other 1TB SSDs.
How Intel's consumer SSDs expire once you surpass the endurance threshold is troubling. In an almost overzealous move to protect user data, Intel instituted a feature on many of its existing SSDs that automatically switches the drive to a read-only mode once it surpasses the endurance threshold (measured via the MWI SMART attribute). Surprisingly, the read-only state only lasts for a single boot cycle. After a reboot, the SSD "locks" itself (meaning you cannot access the data) to protect the user from any data loss due to the weakened flash. The operating system typically generates error notifications when an SSD switches into read-only mode, so most users will restart without realizing that the SSD will be inaccessible upon the next reboot. The process to recover the data is unclear.
Higher-capacity SSDs normally provide more endurance than smaller models. It is a well-documented fact that most flash can easily outlast its endurance rating, and Intel is probably erring on the side of caution with the low endurance ratings on its 1TB SSDs. In any case, the 600p's blanket endurance rating imposes a somewhat unneeded restriction on the potentially more durable high-capacity 1TB model. The Intel 335 Series famously used this technique, but in contrast, the 240GB 335 model delivered 10x the endurance of the new 600p NVMe SSD before retiring itself. We reached out to Intel to verify whether the 600p also has this feature, but have yet to receive a response. (EDIT: Intel confirmed the nature of the endurance limit and provided an official response outlining the recovery procedure.)
The 600p doesn't utilize a direct-to-die write scheme, so the first write goes to the SLC buffer before the controller flushes it to the TLC area for "long term" storage. This process doubles write amplification, thus magnifying the discouraging endurance situation.
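Some rough arithmetic shows how restrictive the 72TB rating is in practice. The figures below use the rated TBW, the five-year warranty, and a write-amplification factor of 2 from the SLC-then-TLC double write described above; the 512GB capacity is just a worked example.

```python
# Back-of-the-envelope endurance arithmetic for a 72TB TBW rating.
TBW_TB = 72            # rated terabytes written (host writes)
WARRANTY_YEARS = 5
CAPACITY_GB = 512      # example capacity

# Host-write budget per day over the warranty period.
gb_per_day = TBW_TB * 1000 / (WARRANTY_YEARS * 365)

# Drive writes per day (DWPD) for the 512GB model.
dwpd = gb_per_day / CAPACITY_GB

print(f"{gb_per_day:.1f} GB of host writes per day over the warranty")
print(f"{dwpd:.3f} drive writes per day for the {CAPACITY_GB}GB model")

# With write amplification of ~2 (SLC buffer first, then TLC), the
# NAND itself absorbs roughly twice the host-write volume.
nand_writes_tb = TBW_TB * 2
print(f"~{nand_writes_tb}TB of internal NAND writes at WA = 2")
```

That works out to roughly 39GB of host writes per day, or under 0.08 drive writes per day on the 512GB model, which is far below what mainstream consumer SSDs typically tolerate.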

Technical Specifications

Intel plans to bring the 600p to market in four capacities. The 128GB, 256GB and 512GB products are shipping now, but the 1TB model will enter the market later in 2016. Intel wrote the firmware for the SM2260 controller and paired it with 384Gbit 3D NAND flash running in 3-bit per cell (TLC) mode. Intel was only able to fit three NAND flash packages on the 600p due to the size constraints of the M.2 2280 single-sided form factor. As a result, the 8-channel controller only operates in 6-channel mode.
The odd 384Gbit (48GB) NAND die leads to somewhat exotic capacity points. We've already explored those with the MX300 that ships in 275GB, 525GB, 1050GB and 2TB capacities. Instead of choosing the same path, Intel adhered to standard sizes and rolled the extra capacity into a dedicated SLC cache area. The larger-than-normal SLC buffers should deliver exemplary performance, but that isn't always the case. 
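The capacity math behind those "exotic" points is simple to verify. The die-size conversion below is straightforward; the die counts are illustrative assumptions on our part, not figures from Intel or Micron.

```python
# A 384Gbit die works out to 48GB (384 / 8 bits per byte).
DIE_GBIT = 384
die_gb = DIE_GBIT // 8
print(die_gb, "GB per die")

# Whole numbers of dies land on multiples of 48GB. The MX300's
# 275GB and 525GB points sit just under 6 and 11 dies of raw
# flash; rounding down to a standard size such as 512GB leaves
# spare raw capacity, which Intel dedicates to the SLC cache.
# (Die counts here are illustrative assumptions.)
for n_dies in (6, 11, 12):
    print(n_dies, "dies ->", n_dies * die_gb, "GB raw")
```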
The 600p supports an internal AES 256-bit hardware encryption engine but does not support accelerated eDrive or other encryption services in Windows. For those features, you need the upscale Pro 6000p model, which also supports vPro technology. In the SanDisk X400 review, we argued that companies should stop producing separate models for encrypted disks, and we still stand by that position. At least Intel uses a separate product line name to identify the encrypted models, unlike the X400.

Performance

The Intel 600p delivers better-than-SATA performance, the new marketing tagline for the next step beyond hard-drive replacement. Performance scales as capacity increases, but the numbers are so far apart that we need to break them down by capacity rather than make blanket statements.
Each 600p model uses a fixed-size SLC buffer, and the buffer grows with each step up in drive capacity. The SSD writes incoming data to the SLC-programmed buffer to decrease media wear and increase performance. Unlike most other SSDs, the 600p does not use a direct-to-die algorithm that bypasses the SLC cache when the buffer is full. As a result, when the buffer fills, incoming data must wait while the SSD flushes existing cached data. The "up to" performance figures measure the SLC buffer speed; if you write more data than the buffer can hold, performance will fluctuate. We'll explore that characteristic in our tests.
The 600p 128GB tackles sequential reads at 770 MB/s, but the sequential write performance is only 450 MB/s (less than most mainstream SATA 6Gb/s products). Intel spec’d the random performance at up to 35,000/91,000 read/write IOPS, and it has a 4GB SLC buffer.
The 600p 512GB increases sequential read performance up to 1,775 MB/s. The sequential write performance also increases to a peak of 560 MB/s, which is the same rating we often see attached to premium SATA 6Gb/s SSDs. The random performance is 128,000 IOPS for both reads and writes, and the 512GB has a spacious 17GB SLC buffer. 

Pricing And Accessories

The 600p is cheap compared to other NVMe SSDs on the market. The series debuted with low MSRPs, but if you shop around, it's possible to find even lower prices. Newegg has the 128GB in stock at $66. The 256GB model shows up at $110 and the 512GB at $199. Amazon has the 512GB at $166.

The Intel 600p works with Intel's SSD Toolbox software, but the Toolbox doesn't support all of the features yet. The media wear-out indicator doesn't work, nor does the Optimize feature. Intel updated the software on 6/21/2016. The 600p also doesn't have an Intel NVMe driver. We suspect the performance will increase slightly with a custom NVMe driver for Windows. The embedded Windows NVMe driver works with the 600p, and that is what we used for testing.

Intel SSD owners also gain access to the company's data migration software that will clone the data from an existing drive to a new SSD. At the time of writing, the supported hardware list does not list the 600p as a compatible product, but we expect that to update in November 2016 (per the fine print at the bottom of the product brief). 

Packaging

Intel assumes retail shoppers will know what the "SSD6" branding means on the front of the retail package. The back of the box features more information, but there are very few details about the 600p series (the one you are buying). The description reads:
"6 Series Added Boost For Better Responsiveness"
Given our findings, maybe Intel doesn't want you to know too much about this product other than the fact that it is cheap. This is where things get really sticky. The Intel SSD 600p product page doesn't list any endurance ratings, and neither does the official product brief. As a computer hardware journalist, I've learned that the thing to look for is what a company doesn't tell you. If you want to find the full specification list for the 600p, you need to look at Intel's ARK pages, which reveal the low endurance rating.
A paper manual ships with the drive and outlines the installation procedure and the warranty terms.

A Closer Look

The drive is a simple design with a controller, DRAM and NAND packages. The 256GB and 512GB drives look identical and are the two we have in-house for testing. Both models use three NAND flash packages on one side. Samsung's M.2 2280 products, and those from most other companies, use two NAND packages per side, but Intel managed to fit more in the same footprint. The three NAND packages allow the SMI controller to run in 6-channel mode, while most other M.2 controllers run in 4-channel mode.
