Maximum Performance, Total Flexibility, and Unlimited Access on your own AWS EC2 Instance:
* For SQL databases use the appropriate connection string (see docs).
Frequently Asked Questions (FAQs)
How does Generative AI enhance data matching and cleansing in this product?
We use Generative AI to dramatically enhance our data matching and our ability to identify
inconsistently represented data. A specialized language model, transparent to the user, is built
into the product to enable these advanced capabilities. Our matching algorithms draw on it during
the analysis and processing of data.
What are the requirements to deploy the Interzoid Matching Amazon Machine Image (AMI) on an AWS EC2 instance?
You only need an existing Amazon AWS account. From the AWS Marketplace within your account, simply
launch the Machine Image to an EC2 instance, and you're off and running.
Can I deploy Interzoid's AI-Powered Data Matching Machine Image for EC2 to any AWS region?
Yes. There are no geographic restrictions.
Is there a commitment or long-term contract required to use this product?
No. It is billed inexpensively by the hour, so you can use it for as little as a single hour to run
data quality anomaly and match reports against your datasets, or simply to test drive it with existing data.
Usage costs accrue only while the deployed EC2 instance is running.
How is billing handled for the use of this AWS Marketplace product?
The hours of usage simply appear on your normal, monthly AWS bill. There is no
separate billing.
What types of data sources can Interzoid's AI-Powered Data Matching process?
You can quickly connect to SQL-based database tables using our built-in native drivers to
begin analysis and matching, including AWS RDS, AWS Aurora, AWS Redshift, Snowflake, Databricks,
Azure SQL, Google SQL, Postgres, MySQL, and others. You can also analyze and process data from flat files,
including CSVs, TSVs, Parquet, and Microsoft Excel.
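To illustrate connecting to a SQL source, the sketch below assembles a standard URL-style connection string. This is a generic illustration only: the driver name, host, and credentials are hypothetical placeholders, and Interzoid's built-in native drivers may expect a different format (see the product docs).

```python
def build_connection_string(driver: str, user: str, password: str,
                            host: str, port: int, database: str) -> str:
    """Assemble a conventional URL-style SQL connection string.

    All values here are illustrative placeholders, not the product's
    actual configuration keys.
    """
    return f"{driver}://{user}:{password}@{host}:{port}/{database}"

# Example: a Postgres-style connection string with placeholder values.
conn = build_connection_string(
    "postgresql", "report_user", "s3cret", "db.example.com", 5432, "customers"
)
print(conn)  # postgresql://report_user:s3cret@db.example.com:5432/customers
```

The same URL shape covers most of the engines listed above; only the driver prefix, host, and port change per database.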
Can I analyze data that resides outside my Virtual Private Cloud (VPC)?
Yes. While you can choose to access and process data entirely behind your own firewalls, you can
also connect to databases and files in the Cloud, on-premises, or on your
own workstation or laptop for analysis, processing, and reporting.
Can I integrate this product into existing data pipelines or applications?
Yes. Both the single-record and full-dataset APIs are JSON-based for compatibility. To access them
externally from pipelines, processes, applications, Web forms, or anything else, simply use AWS to
open the port the two APIs are running on, and they can be accessed from anywhere. You can also
configure which ports the APIs listen on if you prefer.
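Because the APIs are JSON over HTTP, calling them from an external pipeline is a standard POST request. The sketch below builds such a request with Python's standard library; the host, port, endpoint path, and payload field names are hypothetical placeholders — the real ones come from the product documentation.

```python
import json
import urllib.request

# Hypothetical endpoint -- substitute your EC2 instance's address, the
# port you opened in its security group, and the documented API path.
API_URL = "http://ec2-instance.example.com:8080/match"

def build_match_request(record_a: str, record_b: str) -> urllib.request.Request:
    """Package two records as a JSON request body for a single-record
    match call. The payload field names are illustrative assumptions."""
    body = json.dumps({"recordA": record_a, "recordB": record_b}).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = build_match_request("Interzoid Inc", "Interzoid Incorporated")
print(req.get_full_url())  # http://ec2-instance.example.com:8080/match

# To actually send it (requires the instance's port to be reachable):
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```

Any language or tool that can issue an HTTP POST with a JSON body can integrate the same way, which is what makes the APIs pipeline-friendly.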
Are there any limitations on the volume of data that can be processed?
Other than the theoretical limits of physics, there are none. Have at it. For massive data
requirements, you can also deploy to more powerful EC2 instances.
What kind of performance can be expected?
Other than reading data from your databases/files and writing out results, all processing occurs
in-memory. In other words, it is lightning-fast. More advanced AWS hardware infrastructure
and more powerful EC2 instance types can also be leveraged to increase performance if desired.
Is technical support available for deploying and using the product?
Yes. We want you to be successful and will help you if you need it.