ingest

Results 1 - 25 of 32
Published By: Pure Storage     Published Date: Jul 03, 2019
For thousands of organizations, Splunk® has become mission-critical. But it’s still a very demanding workload. Pure Storage solutions dramatically improve Splunk Enterprise deployments by accelerating data ingest, indexing, search, and reporting capabilities – giving businesses the speed and intelligence to make faster, more informed decisions.
Tags : 
    
Pure Storage
Published By: Genesys     Published Date: Feb 08, 2019
Given the many options in the highly dynamic market for cloud-based contact centers, finding the right solution for your company is a challenge. The "Ovum Decision Matrix for Selecting a Multichannel Cloud Contact Center, 2017–18" lets you easily compare the leading contact center vendors on their ability to deliver comprehensive capabilities for call routing and cross-channel customer service in the cloud. You will also learn why Genesys was rated a "Leader" offering solutions for companies of every size, in every industry, worldwide. The Ovum Decision Matrix provides: a side-by-side comparison of cloud contact center solutions based on an assessment of their technology platforms; a comparison of vendors on their ability to support cross-channel customer interactions and to use analytics to connect
Tags : 
    
Genesys
Published By: Attunity     Published Date: Jan 14, 2019
This whitepaper explores how to automate your data lake pipeline to address common challenges, including how to prevent data lakes from devolving into useless data swamps and how to deliver analytics-ready data via automation. Read Increase Data Lake ROI with Streaming Data Pipelines to learn about:
• Common data lake origins and challenges, including integrating diverse data from multiple source platforms, with lakes on premises and in the cloud.
• Delivering real-time integration with change data capture (CDC) technology that integrates live transactions with the data lake.
• Rethinking the data lake with a multi-stage methodology, continuous data ingestion, and merging processes that assemble a historical data store.
• Leveraging a scalable and autonomous streaming data pipeline to deliver analytics-ready data sets for better business insights.
Read this Attunity whitepaper now to get ahead on your data lake strategy in 2019.
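To make the change data capture (CDC) concept above more concrete, here is a minimal Python sketch of CDC-style incremental ingestion. It is not Attunity Replicate; the change-log layout, table names, and operation codes are hypothetical, and a real pipeline would read the database's transaction log rather than a polled table.

```python
# Minimal sketch of change data capture (CDC) style ingestion, for illustration only.
# Not Attunity Replicate; the change-log layout and table names are hypothetical.
import sqlite3

def apply_changes(conn, last_seen_id):
    """Read new rows from a hypothetical change log and merge them into the target table."""
    rows = conn.execute(
        "SELECT change_id, op, customer_id, name FROM change_log WHERE change_id > ? ORDER BY change_id",
        (last_seen_id,),
    ).fetchall()
    for change_id, op, customer_id, name in rows:
        if op in ("I", "U"):  # insert or update: upsert into the target
            conn.execute(
                "INSERT INTO customers (customer_id, name) VALUES (?, ?) "
                "ON CONFLICT(customer_id) DO UPDATE SET name = excluded.name",
                (customer_id, name),
            )
        elif op == "D":       # delete
            conn.execute("DELETE FROM customers WHERE customer_id = ?", (customer_id,))
        last_seen_id = change_id
    conn.commit()
    return last_seen_id  # checkpoint so the next poll resumes where this one stopped

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE change_log (change_id INTEGER PRIMARY KEY, op TEXT, customer_id INTEGER, name TEXT)")
    conn.executemany(
        "INSERT INTO change_log VALUES (?, ?, ?, ?)",
        [(1, "I", 100, "Acme"), (2, "U", 100, "Acme Corp"), (3, "I", 101, "Globex")],
    )
    checkpoint = apply_changes(conn, last_seen_id=0)
    print(conn.execute("SELECT * FROM customers").fetchall(), "checkpoint:", checkpoint)
```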
Tags : 
data lake, data pipeline, change data capture, data swamp, hybrid data integration, data ingestion, streaming data, real-time data, big data, hadoop, agile analytics, cloud data lake, cloud data warehouse, data lake ingestion
    
Attunity
Published By: BMC ASEAN     Published Date: Dec 18, 2018
Big data projects often entail moving data between multiple cloud and legacy on-premise environments. A typical scenario involves moving data from a cloud-based source to a cloud-based normalization application, then to an on-premise system for consolidation with other data, and then through various cloud and on-premise applications that analyze the data. Processing and analysis turn the disparate data into business insights delivered through dashboards, reports, and data warehouses - often using cloud-based apps. The workflows that take data from ingestion to delivery are highly complex and have numerous dependencies along the way. Speed, reliability, and scalability are crucial. So, although data scientists and engineers may do things manually during proof of concept, manual processes don't scale.
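The dependency problem described above is essentially workflow orchestration. Below is a minimal sketch of dependency-ordered execution; the step names are hypothetical, this is not a BMC product, and real schedulers add retries, SLAs, and event-based triggers.

```python
# Sketch of dependency-ordered execution for a multi-environment data workflow.
# Step names are hypothetical; real schedulers add retries, SLAs, and event triggers.
from graphlib import TopologicalSorter

# Each step maps to the set of steps that must complete before it can run.
workflow = {
    "extract_from_cloud_source": set(),
    "normalize_in_cloud_app":    {"extract_from_cloud_source"},
    "consolidate_on_premise":    {"normalize_in_cloud_app"},
    "analyze":                   {"consolidate_on_premise"},
    "publish_dashboards":        {"analyze"},
    "load_warehouse":            {"analyze"},
}

def run(step: str) -> None:
    print(f"running {step}")  # placeholder for the real work

for step in TopologicalSorter(workflow).static_order():
    run(step)  # every step runs only after all of its dependencies
```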
Tags : 
    
BMC ASEAN
Published By: Attunity     Published Date: Nov 15, 2018
With the opportunity to leverage new analytic systems for Big Data and Cloud, companies are looking for ways to deliver live SAP data to platforms such as Hadoop, Kafka, and the Cloud in real-time. However, making live production SAP data seamlessly available wherever needed across diverse platforms and hybrid environments often proves a challenge. Download this paper to learn how Attunity Replicate’s simple, real-time data replication and ingest solution can empower your team to meet fast-changing business requirements in an agile fashion. Our universal SAP data availability solution for analytics supports decisions to improve operations, optimize customer service, and enable companies to compete more effectively.
Tags : 
    
Attunity
Published By: Attunity     Published Date: Nov 15, 2018
IT departments today face serious data integration hurdles when adopting and managing a Hadoop-based data lake. Many lack the ETL and Hadoop coding skills required to replicate data across these large environments. In this whitepaper, learn how you can provide automated Data Lake pipelines that accelerate and streamline your data lake ingestion efforts, enabling IT to deliver more data, ready for agile analytics, to the business.
Tags : 
    
Attunity
Published By: Talend     Published Date: Nov 02, 2018
Siloed data sources, duplicate entries, data breach risk: how can you scale data quality for ingestion and transformation at big data volumes? Data and analytics capabilities are firmly at the top of CEOs' investment priorities. Whether you need to make the case for data quality to your C-level or you are responsible for implementing it, the Definitive Guide to Data Quality can help. Download the Definitive Guide to learn how to:
• Stop bad data before it enters your system
• Create systems and workflows to manage clean data ingestion and transformation at scale
• Make the case for the right data quality tools for business insight
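As a toy illustration of stopping bad data before it enters your system, the sketch below applies a few row-level quality rules at ingestion time. It is not Talend tooling; the fields and rules are hypothetical.

```python
# Minimal data quality gate applied before ingestion; rules and fields are hypothetical.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record may be ingested."""
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    if not EMAIL_RE.match(record.get("email", "")):
        errors.append("malformed email")
    if record.get("amount", 0) < 0:
        errors.append("negative amount")
    return errors

def ingest(records: list[dict]):
    """Split records into clean rows (to load) and rejects (to quarantine for review)."""
    clean, rejects, seen_ids = [], [], set()
    for rec in records:
        errors = validate(rec)
        if rec.get("customer_id") in seen_ids:
            errors.append("duplicate customer_id")
        if errors:
            rejects.append((rec, errors))
        else:
            seen_ids.add(rec["customer_id"])
            clean.append(rec)
    return clean, rejects

clean, rejects = ingest([
    {"customer_id": 1, "email": "a@example.com", "amount": 10.0},
    {"customer_id": 1, "email": "a@example.com", "amount": 12.0},   # duplicate id
    {"customer_id": 2, "email": "not-an-email", "amount": -5.0},    # two violations
])
print(len(clean), "clean;", [(r["customer_id"], e) for r, e in rejects])
```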
Tags : 
    
Talend
Published By: StreamSets     Published Date: Sep 24, 2018
Treat data movement as a continuous, ever-changing operation and actively manage its performance. Before big data and fast data, the challenge of data movement was simple: move fields from fairly static databases to an appropriate home in a data warehouse, or move data between databases and apps in a standardized fashion. The process resembled a factory assembly line.
Tags : 
practices, modern, data, performance
    
StreamSets
Published By: SAS     Published Date: Aug 28, 2018
When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics and operations. Even so, traditional, latent data practices are possible, too. Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. To help users prepare, this TDWI Best Practices Report defines data lake types, then discusses their emerging best practices, enabling technologies and real-world applications. The report's survey quantifies user trends and readiness for data lakes.
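The "ingest raw, process on the fly" pattern can be pictured with a small schema-on-read sketch: raw events land unchanged, and structure is applied only when the data is read for analysis. The paths and field names below are hypothetical, and a production lake would use object storage and a query engine rather than local files.

```python
# Schema-on-read sketch: land raw JSON events as-is, apply structure only at query time.
# Paths and fields are hypothetical; a real lake would use object storage and a query engine.
import json
from pathlib import Path

LAKE = Path("lake/raw/events")
LAKE.mkdir(parents=True, exist_ok=True)

# Ingestion: write source records exactly as received, no upfront schema conversion.
raw_events = [
    '{"user": "u1", "action": "click", "ts": "2019-01-01T10:00:00Z"}',
    '{"user": "u2", "action": "purchase", "amount": 42.5, "ts": "2019-01-01T10:05:00Z"}',
]
for i, line in enumerate(raw_events):
    (LAKE / f"event_{i}.json").write_text(line)

# Exploration: apply whatever schema the analysis needs while reading.
def read_purchases():
    for path in LAKE.glob("*.json"):
        event = json.loads(path.read_text())
        if event.get("action") == "purchase":
            yield {"user": event["user"], "amount": float(event.get("amount", 0))}

print(list(read_purchases()))
```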
Tags : 
    
SAS
Published By: AWS     Published Date: Aug 20, 2018
A modern data warehouse is designed to support rapid data growth and interactive analytics over a variety of relational, non-relational, and streaming data types through a single, easy-to-use interface. It provides a common architectural platform for applying new big data technologies to existing data warehouse methods, thereby enabling organizations to derive deeper business insights. Key elements of a modern data warehouse:
• Data ingestion: take advantage of relational, non-relational, and streaming data sources
• Federated querying: ability to run a query across heterogeneous sources of data
• Data consumption: support numerous types of analysis - ad-hoc exploration, predefined reporting/dashboards, predictive and advanced analytics
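To make the federated querying element more concrete, the sketch below joins a table stored in the warehouse with an external table backed by files in S3, from Python. The endpoint, schema, and table names are hypothetical; Amazon Redshift exposes external tables through features such as Redshift Spectrum, so consult the current documentation for the exact setup.

```python
# Illustrative federated query: join a local warehouse table with an external table over
# data in S3. Connection details, schema, and table names are hypothetical; consult your
# warehouse's documentation (e.g., Redshift Spectrum) for how external schemas are defined.
import psycopg2

conn = psycopg2.connect(
    host="example-warehouse.example.com",  # hypothetical endpoint
    port=5439,
    dbname="analytics",
    user="analyst",
    password="change-me",
)

QUERY = """
SELECT c.region,
       SUM(e.revenue) AS revenue
FROM   customers AS c                      -- table stored in the warehouse
JOIN   spectrum.click_events AS e          -- external table backed by files in S3
       ON e.customer_id = c.customer_id
GROUP  BY c.region
ORDER  BY revenue DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for region, revenue in cur.fetchall():
        print(region, revenue)
```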
Tags : 
    
AWS
Published By: Amazon Web Services     Published Date: Jul 25, 2018
What is a data lake? Organizations have to manage larger volumes of data, from more sources and containing more data types, than ever before. Faced with massive, heterogeneous volumes of data, many organizations have realized that, to deliver timely business insights, they need a data storage and analytics solution that offers more speed and flexibility than traditional systems. A data lake is a new and increasingly popular way to store and analyze data, addressing many of these challenges by allowing an organization to store all of its data in a single, centralized repository. Because the data can be stored in its original form, there is no need to convert it to a predefined schema before ingestion.
Tags : 
    
Amazon Web Services
Published By: AWS     Published Date: Jun 20, 2018
Data and analytics have become an indispensable part of gaining and keeping a competitive edge. But many legacy data warehouses introduce a new challenge for organizations trying to manage large data sets: only a fraction of their data is ever made available for analysis. We call this the "dark data" problem: companies know there is value in the data they collected, but their existing data warehouse is too complex, too slow, and just too expensive to use. A modern data warehouse is designed to support rapid data growth and interactive analytics over a variety of relational, non-relational, and streaming data types through a single, easy-to-use interface. It provides a common architectural platform for applying new big data technologies to existing data warehouse methods, thereby enabling organizations to derive deeper business insights. Key elements of a modern data warehouse:
• Data ingestion: take advantage of relational, non-relational, and streaming data sources
• Federated querying: ability to run a query across heterogeneous sources of data
• Data consumption: support numerous types of analysis - ad-hoc exploration, predefined reporting/dashboards, predictive and advanced analytics
Tags : 
    
AWS
Published By: AWS     Published Date: May 18, 2018
We've become a world of instant information. We carry mobile devices that answer questions in seconds and we track our morning runs from screens on our wrists. News spreads immediately across our social feeds, and traffic alerts direct us away from road closures. As consumers, we have come to expect answers now, in real time. Until recently, businesses seeking real-time information about their customers, products, or applications were challenged to do so. Streaming data, such as website clickstreams, application logs, and IoT device telemetry, could be ingested but not analyzed in real time for any kind of immediate action. For years, analytics were understood to be a snapshot of the past, but never a window into the present. Reports could show us yesterday's sales figures, but not what customers are buying right now. Then, along came the cloud. With the emergence of cloud computing, and new technologies leveraging its inherent scalability and agility, streaming data can now be processed in memory and, more significantly, analyzed as it arrives, in real time.
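As a small illustration of ingesting such streaming events, clickstream records could be written to a managed stream such as Amazon Kinesis Data Streams. The stream name and event fields below are hypothetical, and the script assumes an existing stream and AWS credentials.

```python
# Sketch of pushing clickstream events into a stream for real-time processing.
# Stream name and event fields are hypothetical; requires AWS credentials and an existing stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_click(user_id: str, page: str) -> None:
    event = {"user_id": user_id, "page": page, "event": "click"}
    kinesis.put_record(
        StreamName="clickstream",            # hypothetical stream
        Data=json.dumps(event).encode(),     # payload bytes
        PartitionKey=user_id,                # keeps one user's events on one shard, in order
    )

send_click("u-123", "/pricing")
```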
Tags : 
    
AWS
Published By: AWS     Published Date: Apr 27, 2018
Until recently, businesses seeking real-time information about their customers, products, or applications were challenged to do so. Streaming data, such as website clickstreams, application logs, and IoT device telemetry, could be ingested but not analyzed in real time for any kind of immediate action. For years, analytics were understood to be a snapshot of the past, but never a window into the present. Reports could show us yesterday's sales figures, but not what customers are buying right now. Then, along came the cloud. With the emergence of cloud computing, and new technologies leveraging its inherent scalability and agility, streaming data can now be processed in memory and, more significantly, analyzed as it arrives, in real time. Millions to hundreds of millions of events (such as video streams or application alerts) can be collected and analyzed per hour to deliver insights that can be acted upon in an instant. From financial services to manufacturing, this revolution in real-time analytics is transforming industries.
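On the consuming side, "analyzed as it arrives" can be sketched as a simple polling loop that updates a live metric per event. The stream name is hypothetical, and production consumers typically use a consumer framework or a managed analytics service rather than raw shard polling.

```python
# Sketch of a consumer that tallies events as they arrive from a stream.
# Stream name is hypothetical; real deployments usually rely on a consumer framework.
import json
import time
import boto3
from collections import Counter

kinesis = boto3.client("kinesis", region_name="us-east-1")
shard_id = kinesis.describe_stream(StreamName="clickstream")["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="clickstream", ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

page_views = Counter()
while True:
    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        event = json.loads(record["Data"])
        page_views[event["page"]] += 1          # update the live metric per event
    print(dict(page_views))                      # insight available immediately, not next day
    iterator = batch["NextShardIterator"]
    time.sleep(1)                                # simple poll interval
```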
Tags : 
    
AWS
Published By: Pure Storage     Published Date: Apr 18, 2018
In today’s world, it’s critical to have infrastructure that supports both massive data ingest and rapid analytics evolution. At Pure Storage, we built the ultimate data hub for AI, engineered to accelerate every stage of the data pipeline. Download this infographic for more information.
Tags : 
    
Pure Storage
Published By: Hitachi Vantara     Published Date: Mar 20, 2018
ESG Lab performed hands-on evaluation and testing of the Hitachi Content Platform portfolio, consisting of Hitachi Content Platform (HCP), Hitachi Content Platform Anywhere (HCP Anywhere) online file sharing, Hitachi Data Ingestor (HDI), and Hitachi Content Intelligence (HCI) data aggregation and analysis. Testing focused on integration of the platforms, global access to content, public and private cloud tiering, data quality and analysis, and the ease of deployment and management of the solution.
Tags : 
    
Hitachi Vantara
Published By: SAS     Published Date: Mar 06, 2018
When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics, and operations. Even so, traditional, latent data practices are possible, too. Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. With the right end-user tools, a data lake can enable the self-service data practices that both technical and business users need. These practices wring business value from big data, other new data sources, and burgeoning enterprise data.
Tags : 
    
SAS
Published By: Snowflake     Published Date: Jan 25, 2018
To thrive in today's world of data, knowing how to manage and derive value from semi-structured data like JSON is crucial to delivering valuable insight to your organization. One of the key differentiators in Snowflake is the ability to natively ingest semi-structured data such as JSON, store it efficiently, and then access it quickly using simple extensions to standard SQL. This eBook will give you a modern approach to producing analytics from JSON data using SQL, easily and affordably.
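The "simple extensions to standard SQL" mentioned above are Snowflake's path expressions and casts over VARIANT columns, plus LATERAL FLATTEN for nested arrays. Below is a hedged sketch, issued through the Python connector; the account, table, and field names are hypothetical.

```python
# Sketch of querying semi-structured JSON stored in a Snowflake VARIANT column.
# Account, table, and field names are hypothetical; syntax follows Snowflake's documented
# path expressions (col:path), casts (::), and LATERAL FLATTEN for nested arrays.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="analyst", password="change-me",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)

QUERY = """
SELECT o.src:customer.name::string      AS customer,
       item.value:sku::string           AS sku,
       item.value:qty::number           AS quantity
FROM   orders o,
       LATERAL FLATTEN(input => o.src:items) item
WHERE  o.src:status::string = 'shipped';
"""

for customer, sku, quantity in conn.cursor().execute(QUERY):
    print(customer, sku, quantity)
```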
Tags : 
    
Snowflake
Published By: MemSQL     Published Date: Nov 15, 2017
Pairing Apache Kafka with a Real-Time Database
Learn how to:
• Scope data pipelines all the way from ingest to applications and analytics
• Build data pipelines using a new SQL command: CREATE PIPELINE
• Achieve exactly-once semantics with native pipelines
• Overcome top challenges of real-time data management
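The CREATE PIPELINE command called out above continuously loads a Kafka topic into a table. Below is a hedged sketch of the basic statement, issued over MemSQL's MySQL-compatible wire protocol; the broker, topic, and table names are hypothetical, and the full syntax (formats, transforms) is covered in the MemSQL documentation.

```python
# Sketch: create and start a MemSQL (SingleStore) pipeline that ingests a Kafka topic.
# Host, topic, and table names are hypothetical; MemSQL speaks the MySQL wire protocol,
# so a standard MySQL client library can issue the statements.
import pymysql

conn = pymysql.connect(host="memsql.example.com", user="admin", password="change-me", database="analytics")

CREATE_PIPELINE = """
CREATE PIPELINE clicks_pipeline AS
LOAD DATA KAFKA 'kafka.example.com:9092/clickstream'
INTO TABLE clicks
"""

with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS clicks (user_id TEXT, page TEXT, ts DATETIME)")
    cur.execute(CREATE_PIPELINE)
    cur.execute("START PIPELINE clicks_pipeline")   # begins continuous ingestion from Kafka
conn.commit()
```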
Tags : 
digital transformation, applications, data, pipelines, management
    
MemSQL
Published By: SAS     Published Date: Oct 18, 2017
When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics and operations. Even so, traditional, latent data practices are possible, too. Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. To help users prepare, this TDWI Best Practices Report defines data lake types, then discusses their emerging best practices, enabling technologies and real-world applications. The report's survey quantifies user trends and readiness for data lakes.
Tags : 
    
SAS
Published By: IBM     Published Date: Apr 18, 2017
The data integration tool market was worth approximately $2.8 billion in constant currency at the end of 2015, an increase of 10.5% from the end of 2014. The discipline of data integration comprises the practices, architectural techniques and tools that ingest, transform, combine and provision data across the spectrum of information types in the enterprise and beyond — to meet the data consumption requirements of all applications and business processes. The biggest changes in the market from 2015 are the increased demand for data virtualization, the growing use of data integration tools to combine "data lakes" with existing integration solutions, and the overall expectation that data integration will become cloud- and on-premises-agnostic.
Tags : 
data integration, data security, data optimization, data virtualization, database security, data analytics, data innovation
    
IBM
Published By: IBM     Published Date: Apr 14, 2017
Any organization wishing to process big data from newly identified data sources needs to first determine the characteristics of the data and then define the requirements that must be met to ingest, profile, clean, transform and integrate this data to ready it for analysis. Having done that, it may well be the case that existing tools do not cater for the data variety, data volume and data velocity that these new data sources bring. If so, new technology will clearly need to be considered to meet the needs of the business going forward.
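Determining the characteristics of a newly identified source usually starts with basic profiling of completeness, types, and cardinality. Below is a small sketch using pandas; the sample data and column names are hypothetical, and this is not an IBM tool.

```python
# Quick profile of a newly identified data source before deciding how to ingest and clean it.
# The sample data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@example.com", "bad-email", None, "d@example.com"],
    "amount": [10.0, -3.5, 42.0, 7.25],
})

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_fraction": df.isna().mean(),          # completeness
    "distinct_values": df.nunique(),            # cardinality
})
print(profile)
print("duplicate customer_id rows:", int(df["customer_id"].duplicated(keep=False).sum()))
```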
Tags : 
data integration, big data, data sources, business needs, technological advancements, scaling data
    
IBM
Published By: IBM     Published Date: Jan 20, 2017
Government agencies are taking advantage of new capabilities like mobile and cloud to deliver better services to their citizens. Many agencies are going paperless, streamlining how they interact with citizens and providing services faster and more efficiently. This short video shows real examples of how government agencies are applying new capabilities like cognitive computing and analytics to improve how they ingest, manage, store and interact with content.
Tags : 
    
IBM
Published By: IBM     Published Date: Jan 20, 2017
Government agencies are taking advantage of new capabilities like mobile and cloud to deliver better services to their citizens. Many agencies are going paperless, streamlining how they interact with citizens and providing services faster and more efficiently. This short video shows real examples of how government agencies are applying new capabilities like cognitive computing and analytics to improve how they ingest, manage, store and interact with content.
Tags : 
ibm, ecm, analytics, smarter content, ecm for government
    
IBM
Published By: IBM     Published Date: Aug 04, 2016
IBM BigInsights is ready to help Quest Diagnostics Inc. ingest, normalize and analyze huge datasets, delivering new insight into clinical outcomes for physicians, hospitals, and millions of patients.
Tags : 
ibm, analytics, myaa, quest diagnostics, case study, data analytics, data insight, big data
    
IBM