Thursday, February 14, 2019

Machine Vision and Analyzing Video Streams with IDA Moira Platform on a FlexPod Architecture


Among the many possible applications of artificial intelligence, one impressive use case stands out: the visual recognition and analysis of materials and products in video streams and images. This process is the starting point for countless applications across a wide variety of industries. The underlying principle: pure analytics produces knowledge, but it is artificial intelligence that puts this knowledge to use. Like a human being, a machine must first build up a certain amount of experience – i.e. knowledge – through countless experiments. This know-how is required to reliably identify different objects and states via video analytics.

Practice makes perfect!


In order to familiarize an AI with a large variety of objects, one of our analytics platform partners, Intelligent Data System GmbH from Frankfurt, has set up a neural-network training environment on the NetApp/Cisco FlexPod architecture with the help of its complex event processing platform MOIRA.

Training an AI-supported video recognition system can be challenging: machine learning is indispensable for the classification of objects, and it goes hand in hand with large, memory-intensive data models. In addition, machine learning models must be trained specifically for their respective use cases and require a sufficient amount of learning material. For this reason, it is often difficult to arrive at successful models. If the undertaking succeeds, however, a wide variety of objects can be identified.
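The training principle can be illustrated with a deliberately tiny sketch: a nearest-centroid classifier trained on made-up 2-D "feature vectors" for two object classes. Real video recognition uses deep neural networks and vastly more data; the class names and numbers below are purely illustrative assumptions, but the point stands – without enough labelled learning material per class, there is nothing reliable to classify against.

```python
# Toy sketch of training and classification (illustrative data, not a real
# video recognition model): each class is represented by the centroid of
# its labelled training examples, and new samples are assigned to the
# class with the nearest centroid.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labelled):
    """labelled: {class_name: [feature_vector, ...]} -> per-class centroids."""
    return {cls: centroid(pts) for cls, pts in labelled.items()}

def classify(model, point):
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda cls: dist2(model[cls]))

# Hypothetical labelled "learning material" for two object classes:
training_data = {
    "bottle": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "box":    [(4.0, 3.8), (4.2, 4.1), (3.9, 4.0)],
}
model = train(training_data)
print(classify(model, (1.0, 1.1)), classify(model, (4.1, 3.9)))
```

More training examples per class move the centroids closer to the true class averages – the code-level version of "practice makes perfect".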

In practice, this results in a flexible blueprint for a variety of video analysis use cases. It can be applied, for example, in quality assurance, in logistics for the recognition of goods, in video surveillance, or in an automatic inventory solution via image recognition – to name just a few areas.

FlexPod: the perfect architecture for video analytics requirements


AI-based projects are only as successful as their trained “experience”. This calls for IT architectures that are extremely fast and facilitate this ongoing “training camp”. To give neural networks the ideal training stimuli, the training infrastructure must be as flexible and configurable as possible. Converged systems like the FlexPod are ideally suited to both the on-premises and the hybrid cloud approach.

NetApp and Cisco developed FlexPod with the goal of maximum flexibility for exactly this type of application. It consists of high-performance, intelligent, cloud-connected NetApp flash storage, Cisco UCS (Unified Computing System) servers with Nvidia GPUs, and highly secure network components (Nexus switches). This powerhouse stores, manages, and secures the large video streams generated by high-resolution cameras in NetApp data management in a space-saving manner. The models are then trained and the data analyzed on the Nvidia V100 GPUs.

Benefits of video analytics with NetApp FlexPod:


  • Native hybrid cloud integration for storage, analytics, and archiving
  • Extremely fast flash/NVMe and Nvidia architectures
  • Native protocol support for connectivity to a wide range of data repositories
  • High automation and scalability from small to large
  • Data persistence and availability, important for container and Kubernetes concepts

Pass Your NetApp Certification Exams In First Attempt



Thursday, January 31, 2019

Creating a Unified Content Platform in the Cloud for Broadcasting - NetApp Certifications


With 120,000 hours of programming each year, more than 1 billion monthly online video views, and 14 free and pay TV channels in 45 million households, German broadcaster ProSiebenSat.1 has significant data requirements. And with ultrahigh-definition (UHD) content becoming more commonplace, the company’s data is growing at a rate of 100TB per month.

The huge task of storing such vast amounts of data was just one problem for ProSiebenSat.1. Silos were another problem. ProSiebenSat.1’s data requirements for its 24-hour scheduled broadcast channel were handled by storage platforms that were completely separate from the data needs of the company’s growing online media presence. ProSiebenSat.1’s data center was physically running out of room for legacy digital tape archives, and it needed to find a new, unified way forward.

ProSiebenSat.1 found its answer in NetApp® technology. By investing in NetApp solutions, this broadcasting giant found a way to meet its current needs while also future-proofing its storage system.

Developing a Unified Content Platform in the Cloud


By moving to a private cloud storage model, ProSiebenSat.1 migrated its entire 12PB archive into a much smaller footprint on a unified content platform. This model also enabled native data replication to a second site, increasing the resilience of ProSiebenSat.1’s media repository and improving the business continuity profile of the overall enterprise.

This move to a private cloud also helps the broadcasting company continue to grow and to innovate. The transition opened a pathway to faster application development for the processing and distribution of content data across the ProSiebenSat.1-owned networks and business partner delivery outlets.

Unification of storage and DevOps is a key component that greatly enhances system performance. ProSiebenSat.1 uses the NetApp StorageGRID® object-based storage solution to house the unified content platform, which includes current media content and programming that dates back 20 years. StorageGRID also stores all the application and development code backups from the NetApp SolidFire® DevOps environment. ProSiebenSat.1’s 100 in-house developers are building applications and are hosting them in Kubernetes on the SolidFire all-flash array.

Streamlined Workflows


All workflows for the various networks, processing, and distribution to the company’s 100-plus broadcast and social media partners and on-demand video service use StorageGRID as the common content repository source. With this approach, all the business units’ media processing can be supported by a common set of tools. ProSiebenSat.1 is now working with several off-the-shelf applications that support the S3 object protocol, including Cantemo Portal, Vidispine, Interra Systems BATON, Capella Systems Cambria, and IBM Archive and Essence Manager (AREMA).
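The attraction of a common S3-style repository is the flat bucket/key interface that every workflow – playout, online delivery, archive – can share. The following is an in-memory stand-in for that interface, not real StorageGRID client code; bucket names, keys, and file names are invented for the example.

```python
# Illustrative in-memory stand-in for an S3-style object store, sketching
# how one flat repository can serve several workflows through the same
# bucket/key interface (all names below are hypothetical).

class ObjectStore:
    def __init__(self):
        self._objects = {}                     # (bucket, key) -> bytes

    def put_object(self, bucket, key, body):
        self._objects[(bucket, key)] = body

    def get_object(self, bucket, key):
        return self._objects[(bucket, key)]

    def list_objects(self, bucket, prefix=""):
        # S3 has no real directories: "folders" are just key prefixes.
        return sorted(k for (b, k) in self._objects
                      if b == bucket and k.startswith(prefix))

# Broadcast playout and online delivery write into the same repository:
store = ObjectStore()
store.put_object("media", "2019/broadcast/ep-0142/master.mxf", b"...")
store.put_object("media", "2019/online/ep-0142/h264-1080p.mp4", b"...")
print(store.list_objects("media", prefix="2019/"))
```

Because every tool addresses content by bucket and key rather than by mount point, the repository can back all business units without per-silo storage platforms.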

Our experts say about NetApp Certification Exams



Sunday, January 20, 2019

Why Your Network Is Continuously Tested to Destruction - NetApp Certifications


TCP, the ubiquitous IP transport protocol used for virtually all data exchanges in a storage context, is designed to probe eternally for higher bandwidth. Eventually, between the vast number of hosts all trying to deliver data ever more quickly and the network you operate, something has to give.

Today’s deployed networking gear is hardly ever configured differently from the factory default settings in this regard, so when this happens, packet loss is the dire consequence. Dropping packets is the only option available to your switches, routers, and WLAN access points to shed some load and get themselves a (very) short break.

However, not every lost packet is created equal. From the viewpoint of the switch, any packet loss (euphemistically also called a drop or discard, with the relevant counters often hidden from plain sight) just takes away a very tiny fraction of the possible bandwidth – and congestion only happens when the load on a link is constantly right at 100%, correct?

Not so fast: this simplistic view misses the bigger picture. As hinted above, the prevailing protocol nowadays for connecting networked devices to each other is TCP. And TCP not only delivers data in order and reliably (though not necessarily in a timely fashion); it has also co-evolved over the last 30 years to deal with the harsh realities of packet networks.

Today, packet loss is virtually always a sign of network congestion; even low-layer WiFi links have sophisticated mechanisms that try very hard to get a packet delivered. Even there, packets typically get discarded not while “on the air”, but while waiting for some earlier packet to be properly delivered, e.g. at a much lower transmission rate. Again, the incoming data has to queue up, and eventually the queue is full – packet loss and congestion caught in the act.

But think of your datacenter, where you have only two servers that need to write data to your NetApp storage system of choice. As soon as these two hosts each transmit at just over half the link rate – say 630 MB/s towards a 10G LIF, which can carry at most about 1,250 MB/s – in the very same microsecond, the switch capacity and the link bandwidth towards the storage are overloaded, and at least some data has to queue up again.
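The arithmetic of that overload can be sketched in a few lines. The buffer size and send rates below are assumed, illustrative numbers, not properties of any particular switch: once the combined arrival rate exceeds what the egress link can carry, the excess accumulates in the queue until the buffer overflows and tail-drop begins.

```python
# Illustrative sketch (assumed numbers): two senders sharing one 10G link.
# A 10GbE link carries at most ~1250 MB/s of payload; any sustained excess
# piles up in the egress queue until the buffer overflows.

LINK_MB_S = 1250          # ~10 Gbit/s expressed in MB/s
BUFFER_MB = 12            # hypothetical egress buffer size

def queue_after(rates_mb_s, seconds):
    """Return (queued_mb, dropped_mb) after `seconds` of sustained load."""
    arrival = sum(rates_mb_s)
    excess = max(0, arrival - LINK_MB_S)     # MB/s that cannot be forwarded
    backlog = excess * seconds
    queued = min(backlog, BUFFER_MB)
    dropped = max(0, backlog - BUFFER_MB)    # overflow is tail-dropped
    return queued, dropped

# Two hosts each sending 630 MB/s: 1260 MB/s arrives, 1250 MB/s leaves.
queued, dropped = queue_after([630, 630], seconds=2)
print(queued, dropped)
```

Even a 10 MB/s excess – less than 1% over capacity – fills a 12 MB buffer in just over a second; after that, every additional megabyte is dropped.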

So, isn’t more queueing buffer in all participating devices the solution? There are certainly fashions in network technology – some time ago, deep-buffered switches were all the rage; then shallow-buffered switches had their moment. Nowadays, neither appears to be a significant talking point any more – the fashion train has moved on to different marketing statements (while switches still come in shallow- and deep-buffered varieties).

But again, this simplistic view – more buffers mitigate packet losses, and all is good – misses the bigger picture.

A short primer on how TCP works, in very broad strokes: unless TCP senses that there may be an issue with the bandwidth towards the other end host, it continues to increase its sending rate – always. So while your network device buffers more data, the sender only keeps increasing its sending rate, filling up the buffer ever more quickly. Until, that is, an indication of network overload (yes, this is an allusion to packet loss) arrives back at the sender. But typically the loss happens on enqueue – that is, to the freshest packet, the one that happens to arrive when the queue is already full. And the receiver will only notice this *after* all the previous packets in the queue have been delivered. With a huge queue, it takes that much longer until the receiver learns about the lost packet, and only then can it inform the sender – which has just kept on increasing its sending rate until now…
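A back-of-the-envelope calculation shows how buffer depth stretches that feedback loop. This is not a full TCP model – the link speed, base RTT, and buffer sizes are assumed numbers – but the core relation holds: the receiver cannot notice the gap until everything queued in front of the dropped packet has drained, so the loss signal is delayed by roughly the queue drain time plus the round trip.

```python
# Rough sketch of why deep buffers delay TCP's congestion signal
# (illustrative numbers, not a full TCP model). A tail-dropped packet is
# only detected after the full queue ahead of it has drained onto the wire.

LINK_BYTES_PER_S = 1.25e9      # ~10 Gbit/s
BASE_RTT_S = 0.001             # assumed 1 ms path RTT without queueing

def loss_feedback_delay(buffer_bytes):
    """Seconds from tail-drop until the sender can react (drain + RTT)."""
    drain = buffer_bytes / LINK_BYTES_PER_S
    return drain + BASE_RTT_S

shallow = loss_feedback_delay(1 * 1024**2)    # 1 MiB of buffering
deep = loss_feedback_delay(256 * 1024**2)     # 256 MiB "deep buffer"
print(f"{shallow * 1000:.1f} ms vs {deep * 1000:.1f} ms")
```

With these assumptions, the deep-buffered case delays the congestion signal by more than a hundredfold – during which the sender keeps ramping up, making the eventual correction all the more violent.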

In summary, let me conclude with the following observations: it is a false goal to try to avoid packet loss at all costs (deep-buffered switches, priority or link-layer flow control) when you are running TCP. TCP will just try to go even faster, inducing unnecessary buffering delays. Instead, do away with the legacy drop-tail queueing discipline that is the factory default everywhere and that interacts poorly with the latency-sensitive yet reliable data transfers we have in storage. Moving to an AQM like RED/WRED (random-detect) is also the first step towards enabling truly lossless networks with today’s technology. But more about Explicit Congestion Notification (enabled by default on more hosts in your environment than you are aware of) in a later installment.
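The RED idea behind random-detect can be sketched in a few lines. Instead of dropping only when the queue is full, drop probability ramps up linearly once the average queue depth crosses a minimum threshold, nudging senders to back off early. The thresholds and maximum probability below are illustrative placeholders, not recommended values.

```python
# Minimal sketch of RED (random early detection) drop behaviour, the idea
# behind `random-detect`: probabilistic early drops between two queue-depth
# thresholds, instead of drop-tail's all-or-nothing overflow.
# MIN_TH, MAX_TH, MAX_P are illustrative, not tuning advice.

MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1   # packets, packets, max drop probability

def red_drop_probability(avg_queue_depth):
    if avg_queue_depth < MIN_TH:
        return 0.0                     # below min threshold: never drop
    if avg_queue_depth >= MAX_TH:
        return 1.0                     # above max threshold: drop everything
    # linear ramp between the thresholds, capped at MAX_P
    return MAX_P * (avg_queue_depth - MIN_TH) / (MAX_TH - MIN_TH)

for depth in (10, 50, 90):
    print(depth, red_drop_probability(depth))
```

Because a few packets are dropped while the queue is still mostly empty, senders get their congestion signal long before the buffer fills, keeping queueing delay low for everyone else.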




Tuesday, January 8, 2019

NetApp CSO 2019 Perspectives - NetApp Certifications


As we enter 2019, what stands out is how trends in business and technology are connected by common themes. For example, AI is at the heart of trends in development, data management, and delivery of applications and services at the edge, core, and cloud. Also essential are containerization as a critical enabling technology and the increasing intelligence of IoT devices at the edge. Navigating the tempests of transformation are developers, whose requirements are driving the rapid creation of new paradigms and technologies that they must then master in pursuit of long-term competitive advantage.

1) AI projects must prove themselves first in the clouds


Still at an early stage of development, AI technologies will see action in an explosion of new projects, the majority of which will begin in public clouds.

A rapidly growing body of AI software and service tools – mostly in the cloud – will make early AI development, experimentation and testing easier and easier. This will enable AI applications to deliver high performance and scalability, both on and off premises, and support multiple data access protocols and varied new data formats. Accordingly, the infrastructure supporting AI workloads will also have to be fast, resilient and automated and it must support the movement of workloads within and among multiple clouds and on and off premises. As AI becomes the next battleground for infrastructure vendors, most new development will use the cloud as a proving ground.

2) IoT: Don’t phone home. Figure it out.


Edge devices will get smarter and more capable of making processing and application decisions in real time.

Traditional Internet of Things (IoT) devices have been built around an inherent “phone home” paradigm: collect data, send it for processing, wait for instructions. But even with the advent of 5G networks, real-time decisions can’t wait for data to make the round trip to a cloud or data center and back, plus the rate of data growth is increasing. As a result, data processing will have to happen close to the consumer and this will intensify the demand for more data processing capabilities at the edge. IoT devices and applications – with built-in services such as data analysis and data reduction – will get better, faster and smarter about deciding what data requires immediate action, what data gets sent home to the core or to the cloud, and even what data can be discarded.

3) Automagically, please


The demand for highly simplified IT services will drive continued abstraction of IT resources and the commoditization of data services.

Remember when car ads began boasting that your first tune up would be at 100,000 miles? (Well, it eventually became sort of true.) Point is, hardly anyone’s spending weekends changing their own oil or spark plugs or adjusting timing belts anymore. You turn on the car, it runs. You don’t have to think about it until you get a message saying something needs attention. Pretty simple. The same expectations are developing for IT infrastructure, starting with storage and data management: developers and practitioners don’t want to think about it, they just want it to work. “Automagically,” please. Especially with containerization and “server-less” technologies, the trend toward abstraction of individual systems and services will drive IT architects to design for data and data processing and to build hybrid, multi-cloud data fabrics rather than just data centers. With the application of predictive technologies and diagnostics, decision makers will rely more and more on extremely robust yet “invisible” data services that deliver data when and where it’s needed, wherever it lives. These new capabilities will also automate the brokerage of infrastructure services as dynamic commodities and the shuttling of containers and workloads to and from the most efficient service provider solutions for the job.

4) Building for multi-cloud will be a choice


Hybrid, multi-cloud will be the default IT architecture for most larger organizations while others will choose the simplicity and consistency of a single cloud provider.

Containers will make workloads extremely portable. But data itself can be far less portable than compute and application resources and that affects the portability of runtime environments. Even if you solve for data gravity, data consistency, data protection, data security and all that, you can still face the problem of platform lock-in and cloud provider-specific services that you’re writing against, which are not portable across clouds at all. As a result, smaller organizations will either develop in-house capabilities as an alternative to cloud service providers, or they’ll choose the simplicity, optimization and hands-off management that come from buying into a single cloud provider. And you can count on service providers to develop new differentiators to reward those who choose lock-in. On the other hand, larger organizations will demand the flexibility, neutrality and cost-effectiveness of being able to move applications between clouds. They’ll leverage containers and data fabrics to break lock-in, to ensure total portability, and to control their own destiny. Whatever path they choose, organizations of all sizes will need to develop policies and practices to get the most out of their choice.

5) The container promise: really cool new stuff


Container-based cloud orchestration will enable true hybrid cloud application development.

Containers promise, among other things, freedom from vendor lock-in. While containerization technologies like Docker will continue to have relevance, the de facto standard for multi-cloud application development (at the risk of stating the obvious) will be Kubernetes. But here’s the cool stuff… New container-based cloud orchestration technologies will enable true hybrid cloud application development, which means new development will produce applications for both public and on-premises use cases: no more porting applications back and forth. This will make it easier and easier to move workloads to where data is being generated, rather than the other way around as has traditionally been the case.

Success Secrets: How you can Pass NetApp Certification Exams in first attempt