Data Management with Diver Platform®
Data Management, Governance, and Analytics Simplified
Because of its unique design, Diver Platform® outperforms other data management, governance, and analytics software products. Diver® can work with your existing data warehouse and integrate data from any number of disparate sources. Users can compare data collected from transaction systems with information in the data warehouse, legacy data sources, spreadsheets, and flat files.
Diver’s® unique use of in-memory technology explains why Diver® users experience consistently fast response times, regardless of underlying big data volumes.
Learn more about Diver Platform’s® data management process.
- Predefined drill paths are not necessary with Diver®: drill down to additional detail data from any dashboard indicator
- Identify, define, and develop metrics that meet project, departmental, or organizational information requirements
- Quickly develop and deploy dashboards appropriate to each user’s job role and information requirements
- Download dashboard metrics, charts, and data to MS Excel®, PowerPoint®, or Adobe® PDF documents
Data Management and Integration
The extract, transform, and load (ETL) tool provides quick and easy access to a multitude of data sources:
- Transactional databases
- Flat files
- ODBC-compliant databases
- Microsoft® Excel® spreadsheets
- Wide range of proprietary data formats such as ERP, EHR, and operational systems
- Apply rules and measures with Diver’s® Measure Factory®
Powerful Data Analysis Software
- No SQL queries or scripting required to explore and analyze your data
- Multiple clients: Web-based, desktop, tablet
- Search, filter, sort, group, and export data from any client
- Dozens of chart types and options help you uncover hidden trends and patterns
- OpenStreetMap, Thunderforest, and Stamen™ mapping supported
- Rich set of built-in functions including string, math, statistical, and logical
Data Governance and Security
- Flexible authentication options: Own, System, LDAP, and Web Server
- Data authorization via assigned properties and access control rules
- Multiple security levels and robust data encryption
- Access control at the data model or field level
“Being able to display information that instantly leads the user to a conclusion is very powerful.”
Jim Staton, Vice President, Information Technology/Mutual Distributing, beverage alcohol industry
Big Data Processing Engine
Dimensional Insight completely re-engineered columnar database technology in Diver Platform® for maximum speed and efficiency. Diver’s® data processing engine uses a column-oriented, shareable database storage format that is optimized for query-time calculations instead of build-time calculations. The design takes advantage of hardware innovations and analysis practices to better handle user behaviors and queries.
Columnar database design
Diver’s® data processing engine uses an in-memory, binary-format columnar database. So, just what is columnar database technology and what makes it so fast?
Typically, a relational database stores fields consecutively in a record, like rows in a table. This is a great design when you want to retrieve all the fields of a record every time the record is accessed. However, business intelligence queries typically need to access only one or a few fields of each record. For these queries, the row-oriented design is not very efficient.
In the self-indexing columnar database design, instead of storing all of the fields for each record together, the records are broken up. The “like” fields for all records, or each column of a table, are stored together in blocks of memory. Now, when you want to perform calculations on the data, such as a SUM, MAX, MIN, COUNT, or AVG, only the relevant columns are accessed, making calculations very fast.
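The difference between the two layouts can be sketched with a toy example in plain Python. This is illustrative only, not Diver's® actual engine or file format: a SUM in the row layout touches every field of every record, while the column layout scans only the one relevant block of values.

```python
# Toy sketch of row-oriented vs. column-oriented storage.
# Illustrative only -- not Diver's actual engine or file format.

# Row-oriented: every record carries all of its fields together.
rows = [
    {"region": "East", "product": "A", "sales": 100.0},
    {"region": "West", "product": "B", "sales": 250.0},
    {"region": "East", "product": "B", "sales": 175.0},
]

# Column-oriented: the "like" fields for all records are stored together.
columns = {
    "region":  ["East", "West", "East"],
    "product": ["A", "B", "B"],
    "sales":   [100.0, 250.0, 175.0],
}

# A SUM over sales in the row layout accesses every record in full.
row_total = sum(record["sales"] for record in rows)

# In the column layout, only the one relevant column is scanned.
col_total = sum(columns["sales"])

assert row_total == col_total == 525.0
```

Both layouts hold the same data; the column layout simply keeps the values an aggregation needs adjacent in memory, which is what makes calculations like SUM or AVG fast.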
Diver’s® data processing engine design is robust enough for challenging enterprise-level business intelligence analysis and delivers fast performance without taxing resources.
- Database size
Diver’s data processing engine does not maintain separate database indexes so the on-disk size of the columnar cBase is small relative to the data input. It handles large data volumes in a single cBase without limits on file size, column count, or number of dimensions, minimizing maintenance tasks.
Cached dives deliver fast response while avoiding stale results. As an in-memory data engine, Diver’s® engine answers most queries directly from memory and caches the results for reuse, eliminating costly disk accesses.
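Result caching of this kind can be sketched with a simple memoized query function. This is a generic illustration of the technique, not Diver's® cache; the `run_query` function and its arguments are hypothetical stand-ins for an expensive dive.

```python
from functools import lru_cache

# Generic sketch of query-result caching -- not Diver's engine.
# `run_query` stands in for an expensive dive against in-memory data.

CALLS = {"count": 0}

@lru_cache(maxsize=128)
def run_query(dimension: str, measure: str) -> float:
    CALLS["count"] += 1          # track how often real work happens
    # ... an expensive scan of in-memory columns would go here ...
    return 42.0                  # placeholder result

run_query("region", "sales")     # computed, then cached
run_query("region", "sales")     # served from the cache; no recomputation
assert CALLS["count"] == 1
```

The second identical query never re-runs the scan, which is the same trade the engine makes: pay for a dive once, then reuse the result while it remains valid.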
When Diver’s® engine loads parts of a cBase into memory, it can share the cBase across multiple engine processes. Multiple dives running at the same time for users with different access share this memory, and the engine processes ensure that each user gets the right results without compromising security.
- Low per-user overhead
Diver’s® data engine delivers fast performance with low per-user overhead, supporting more simultaneous users without a linear increase in memory and processor usage. User connections are closed when an operation completes, so resources are not devoted to idle user sessions.
Diver’s® data engine is built for speed, both for calculations and for builds, significantly boosting run-time performance for clients and productivity of IT staff.
- Run-time performance
Diver’s® data engine algorithms optimize run-time performance for the most commonly used computations. Diver’s® calculation engine compiles formulas into machine code optimized for the processor on which it is running and executes that machine code directly. These design optimizations shave computation time, making run-time performance extremely fast.
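The compile-once, evaluate-many idea behind this can be illustrated in plain Python. Python compiles to bytecode rather than native machine code, so treat this only as an analogy for the technique, not as Diver's® implementation; the formula and record names are made up for the example.

```python
# Sketch of compile-once, evaluate-many formula execution.
# Python compiles to bytecode, not native machine code, so this is
# an analogy for the technique, not Diver's implementation.

# A user-defined formula, parsed and compiled once up front.
formula = "(revenue - cost) / revenue"
compiled = compile(formula, "<formula>", "eval")

records = [
    {"revenue": 200.0, "cost": 150.0},
    {"revenue": 500.0, "cost": 300.0},
]

# Each evaluation reuses the compiled code object; the formula text
# is never re-parsed inside the loop.
margins = [eval(compiled, {"__builtins__": {}}, rec) for rec in records]

assert margins == [0.25, 0.4]
```

Hoisting the parse/compile step out of the per-row loop is what saves time; a native-code engine takes the same idea further by emitting processor-specific machine instructions.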
- Build times
When your data input amounts to 500 million to 1 billion rows or more, you need builds to turn around quickly. Diver’s® data engine does not pre-summarize data, so build times are small relative to the data input, making current data available to users sooner and more frequently.
Users need rapid access to information, and IT needs to manage and support user requirements. The Diver Platform® data engine does both, paired with Workbench® for developers and DiveTab® to keep your mobile workforce connected on the go.
Dimensional Insight’s global team of business intelligence consultants assists with the design, implementation, and customization of your application. Consulting service plans offer the flexibility to deliver complete turnkey solutions, remote support for your internal IT team, or any level of service in between, putting you in control of your application.
- Consolidate data sources
Enterprise-level businesses are seeing an increase in the quantity and diversity of data, in varied formats and from different platforms, along with a growing need to combine views. Spectre, Diver’s® data processing engine, takes advantage of the latest hardware advances, such as faster core speeds, multiple cores for built-in parallel processing, large amounts of memory, solid state disk (SSD), and advanced compiler technology, to radically boost performance in these environments.
Diver’s® Workbench®, an integrated development environment (IDE), helps developers manage the entire back-end process, from data source to portal. Diver’s® data engine configuration and scripts use a single text-based scripting language, which developers access and edit with the robust Workbench® editor. The scripting language is simple and powerful for builds and dives. Workbench® speeds development with highlights for important parts of the script, code suggestions, and descriptive help.
- DiveTab® client
Powered by Diver’s® data engine, the DiveTab® client is a tablet-based mobile technology for self-service reporting and analysis that drives data-driven decision making and information delivery using dashboards. DiveTab® uses the speed of Diver’s® data engine for rapid and secure access to your data and other resources, such as presentations and documents, from a central location.
What is “data management”?
Here is BusinessDictionary.com’s definition of “data management”:
Administrative process by which the required data is acquired, validated, stored, protected, and processed, and by which its accessibility, reliability, and timeliness is ensured to satisfy the needs of the data users.