CX Journeys Spring 2020 Release Notes

In this product update you will find updates on:

  • Export Segment
  • Import Segment
  • Roles

Export Segment

The Export Segment enables exporting a set of users from the CX Journeys platform to different destinations such as Google Bucket, S3, and SFTP (SSH).

Which formats are supported?
The supported formats are CSV and JSON.

Who can export segments?
Each project user can export segments.

Which projects support export segments?
At the moment, this solution is available only for the new project architecture. To check whether your project supports the export, please contact Professional Services.

So how can I export a segment?
Under the segments section, clicking on the 3-dots button (in the right corner of each segment) will open a drop-down list; the 3rd option will be Export.

After clicking on export, the following screen appears:

  • Supported formats are CSV and JSON (with an option to Gzip the file)
  • In the export attributes option, the user has to pick the desired user attributes to be exported
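
For illustration, here is a minimal Python sketch of what the two output shapes can look like, including the optional Gzip compression (the file names, field names, and one-object-per-line JSON layout below are our assumptions, not the platform's exact output):

```python
import csv, gzip, json

# Hypothetical exported rows; real exports contain the attributes you picked.
users = [
    {"user_id": "u_001", "email": "a@example.com", "plan": "pro"},
    {"user_id": "u_002", "email": "b@example.com", "plan": "free"},
]

# CSV, gzipped (the optional Gzip toggle compresses the output file).
with gzip.open("segment_export.csv.gz", "wt", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["user_id", "email", "plan"])
    writer.writeheader()
    writer.writerows(users)

# JSON: one object per line is a common convention for bulk exports
# (assumed here; the platform's exact JSON layout may differ).
with gzip.open("segment_export.json.gz", "wt") as f:
    for user in users:
        f.write(json.dumps(user) + "\n")
```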

Now you’re ready to move to the 2nd step!

In the 2nd step, the user will be able to choose one of the following options:

  • Google bucket
  • S3
  • SFTP (SSH)

Google Bucket

  • Destination URI – the location address of the bucket destination
  • Google bucket connection approach
    • CX Journeys Service Account Address – In this approach, users have to copy the CX Journeys address and paste it into the relevant bucket: in the bucket browser (in GCP), under the Permissions tab, click Add Members and pick a role for the permission. The minimum role needed is “Storage Legacy Bucket Owner” (see the sketch after this list).
    • Another approach is creating a custom role (under the Roles section) with the following permissions:
      • storage.buckets.get
      • storage.buckets.getIamPolicy
      • storage.objects.delete
      • storage.objects.create
    • After the role has been created, click Add Members and pick the created role for the permission.
    • Service Account Key – In this approach, the user has to create a service account in Google Cloud:
      • Under IAM & Admin, click Service Accounts, then Create Service Account
      • After completing steps 1 and 2 (no need to add permissions), in the last stage the user will need to create a key in JSON format and upload it to the wizard
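
If you prefer to script the first approach instead of clicking through the GCP console, here is a minimal sketch using the google-cloud-storage client (the bucket name and service account address are hypothetical; use the address shown in the wizard):

```python
from google.cloud import storage  # pip install google-cloud-storage

# Hypothetical names; use your own bucket and the address from the wizard.
BUCKET = "my-export-bucket"
CX_SERVICE_ACCOUNT = "cx-journeys@example-project.iam.gserviceaccount.com"

client = storage.Client()
bucket = client.get_bucket(BUCKET)

# Grant the CX Journeys service account the minimum role from the steps above.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.legacyBucketOwner",
    "members": {f"serviceAccount:{CX_SERVICE_ACCOUNT}"},
})
bucket.set_iam_policy(policy)
```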

S3

The following fields should be filled:

  • Destination URI – The location address of the bucket destination
  • Region (default is US East (N. Virginia))
  • Access Key ID – The user identifier for the bucket destination
  • Secret Access Key – The user password for the bucket destination
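
As a sanity check outside the wizard, the same fields can be verified with a short boto3 sketch (all values below are placeholders):

```python
import boto3  # pip install boto3

# Placeholder values; fill in the same fields you enter in the wizard.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",            # US East (N. Virginia), the default
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)
# A successful head_bucket call confirms the credentials can reach the bucket.
s3.head_bucket(Bucket="my-export-bucket")
```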

SFTP (SSH)

This option is fully customizable by the user, depending on where the SFTP server is hosted. The following fields should be filled:

  • File Path – The required folder for the SFTP file
  • Host Name – The location address of the SFTP server
  • Port (usually 22)
  • Username
  • Password
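
The built-in Test Connection does this for you, but for illustration, here is a minimal paramiko sketch that exercises the same fields (all connection details below are placeholders):

```python
import paramiko  # pip install paramiko

# Placeholder connection details matching the wizard fields.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="export-user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

sftp.chdir("/exports")   # File Path: the folder the export is written to
print(sftp.listdir())    # listing the folder confirms access

sftp.close()
transport.close()
```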

Once the configurations have been set, the user will be able to test the connection and/or move to the last step.

The user will have to set a few last preferences regarding the process:

  • Scheduling/one time operation (manual)
  • Email notification if the export fails for any reason
  • Save/Save and run now

Import Segment

The Import Segment enables importing a set of users into the CX Journeys platform, using a manual file upload or the project's Google Bucket source.

Which formats are supported?
The supported formats are CSV and JSON; however, different extensions (json, jsn, csv, txt, gzip, zip & no extension) are supported as well.
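
For illustration, here is a best-effort Python sketch of how such mixed extensions could be sniffed and read; this is our own sketch, not the platform's actual detection logic:

```python
import csv, gzip, io, json, zipfile
from pathlib import Path

def read_import_file(path):
    """Best-effort reader for the supported formats, sniffing content
    rather than trusting the extension (a sketch; the platform's own
    logic may differ)."""
    raw = Path(path).read_bytes()
    if raw[:2] == b"\x1f\x8b":                      # gzip magic bytes
        raw = gzip.decompress(raw)
    elif raw[:2] == b"PK":                          # zip magic bytes
        with zipfile.ZipFile(io.BytesIO(raw)) as zf:
            raw = zf.read(zf.namelist()[0])
    text = raw.decode("utf-8")
    try:                                            # JSON, one object per line
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    except json.JSONDecodeError:                    # otherwise treat as CSV
        return list(csv.DictReader(io.StringIO(text)))
```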

Who can import segments?
Each project user can import segments.

How can I import my own segment?
Under the segments section, clicking on the + button (in the top right corner) will open a drop-down list; the 3rd option will be Import. The user will be able to choose from two options:

  • Import from Google Cloud Storage
  • Upload local file

Google Bucket

Choosing Google Cloud Storage shows the following screen:

  • File: clicking the Browse button will allow the user to choose the desired file from the account's source bucket. If you don't have access to the source bucket, please reach out to your account admin or Professional Services.
  • The Create New Users option allows the user to import new users to the platform
  • The Update Users option allows the user to update existing users' attributes
  • Unify Users allows the user to unify multiple user records into a single user, according to the identity source logic (the recommended state is No)

Manual upload:

The manual upload approach holds the same functionality; the only difference is a file size limit of 50MB.

After completing the 1st step, the 2nd step is identical for both options: the user will reach the mapping step.

On the left side, the source file properties are shown, while on the right side the tool recommends the fields (properties) to map them to. In addition, for each property the platform recommends the expected type, which can be modified by the user (only for new properties).

For each property the user can choose the following options:

  • Add the property as a new property (when there is no matching property in the platform)
  • Ignore the property
  • Map the property to an existing property

Note: in the mapping stage, the user has to map at least one identifier field.
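
For illustration only, the mapping choices can be thought of as a small table like the following sketch (the structure and field names are hypothetical; the real mapping is built in the wizard UI):

```python
# Hypothetical representation of the mapping step, for illustration only.
mapping = [
    {"source": "email",     "action": "map",    "target": "email"},  # identifier field
    {"source": "plan_tier", "action": "new",    "type": "STRING"},   # added as a new property
    {"source": "debug_col", "action": "ignore"},                     # dropped from the import
]

# The note above: at least one identifier field must be mapped.
assert any(m["action"] == "map" for m in mapping), "map at least one identifier field"
```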

Once the mapping has been set, the user will be able to reach the 3rd and final step.

The user will have to set a few last preferences regarding the process:

  • Segment name
  • Scheduling/one time operation (manual)
  • Email notification if the import fails for any reason
  • Save/Save and run now

Roles

A role is a group of users who hold the same permissions, either for a project or a dashboard. Once you add a role to a dashboard (or project), all the users in that role immediately get the same permissions.

Who can manage Roles?
Roles are managed by the Account Admin.

What is an Account?
An account is the new parent hierarchy for project management, meaning that one account can hold several projects, which can easily be navigated between (Settings -> Active Project).

As a result, an “Account Admin” user level has been added. For each account, only account admins will be able to create new roles.

How can I create a new role? 
Click on User icon -> Roles -> + Button

Each role can be related to several projects and/or several dashboards (left side). On the right side the account admin can add all the relevant users that will be part of the role.

Can I share a dashboard to roles?
Sure, clicking on the dashboard share button will allow users to manage their share preferences for users and roles.

If I’m a project admin, can I provide a role permission?
Yes. A role permission to a project can be provided (or removed) via the permissions section (Settings-> Permissions -> Roles Tab).


Cooladata Winter 2019 Release Notes

In this product update you will find updates on:

  • Path analysis visualization (Sankey)
  • New enhanced date picker
  • Additional customized chart color palettes
  • Improved KPI report functionality: added Having and Sort by capabilities
  • Firebase support
  • Google Standard SQL support (We continue to support Legacy SQL)
  • Date condition for merging Aggregation and Models tables
  • UI / UX improvements
  • Shipment and cache notifications
  • Ability to download models logs

 

New / Improved Features:

  • Path analysis visualization (Sankey)
    We are aware that the best way to get insights is by visualizing the data, so we’ve looked for a better way to present the user journey: meet our new Sankey visualization!
    It allows you to read user actions in a simple and easy way and can help you maximize your conclusions about your users’ flow. It is available in the Path builder and the CQL editor, and the colors of the graph can be controlled through the visualization settings.
  • New enhanced date picker
    We designed a new date picker that expands the querying abilities and improves the user experience.
    Now you can query your data with more time functions, including:
    – “Last N Days”: counts days back, not including the current day. To include the current day, just set the “Include today” toggle ON.
    – “Previous N Weeks / Months / Quarters / Years”: calculates whole calendar time periods, not including the current one.
    – “Current Week / Month / Quarter / Year”: calculates the current calendar time period.

    Custom date range by hours: adds the option to define a start hour on the start date and an end hour on the end date.
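
For clarity, here is a small Python sketch of the date arithmetic these functions correspond to (assuming weeks start on Monday; the platform's calendar settings may differ):

```python
from datetime import date, timedelta

today = date(2019, 11, 20)  # example "today"

# "Last N days": N days back, excluding today ("Include today" adds it).
n = 7
last_n_days = (today - timedelta(days=n), today - timedelta(days=1))

# "Previous N weeks": whole calendar weeks before the current one
# (weeks assumed here to start on Monday).
week_start = today - timedelta(days=today.weekday())
previous_2_weeks = (week_start - timedelta(weeks=2), week_start - timedelta(days=1))

# "Current month": from the 1st of the month through today.
current_month = (today.replace(day=1), today)
```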
  • Additional customized chart color palettes
    We expanded our chart color palettes and, as of today, each report can be set with a different color palette to best fit your site’s design.
    To choose a new palette, open the report and change the color palette in the visualization settings.

  • Improved KPI report functionality: added Having and Sort by capabilities
    Cooladata added an advanced menu to the KPI report which enables you to add the “HAVING” and “SORT BY” SQL functions to the KPI query.
    The advanced menu is shown at the bottom of the builder once the KPI report has at least one custom step and a breakdown.
    This ability allows you to filter aggregated reports easily.

    For more information see our KPI documentation.
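
For readers who think in SQL, this is roughly the kind of query the new options correspond to (table and column names are made up; the builder generates the real SQL for you):

```python
# Illustrative only: HAVING filters aggregated rows, SORT BY orders the result.
kpi_query = """
SELECT country, COUNT(DISTINCT user_id) AS users
FROM events
GROUP BY country
HAVING COUNT(DISTINCT user_id) > 100   -- the new HAVING option
ORDER BY users DESC                    -- the new SORT BY option
"""
```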
  • Firebase support
    Based on customer feedback, and having spotted an underserved need in Firebase analytics, we released special features to help companies that rely on Firebase as a development platform.

    1. JSON flattening at the ETL level for deserializing nested JSON and matching it to the known schema of Firebase. The result is the ability to query without having to worry about unnesting.
    2. Multiple project consolidation – companies running two projects or more enjoy consolidation for enhanced querying and comparison. This is also a significant cost reduction.
    3. Unlimited properties per event – Firebase allows only 25 unique parameters with each event type. With Cooladata, there is no limit to the number of custom properties and dimensions per event.
    4. Enriching from the Firebase Realtime Database – we combine and keep updated all properties and dimensions from Firebase in the fact table, making it more efficient to produce insights by merging both raw and real-time data.
      For more information see our documentation.
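
To make point 1 concrete, here is a minimal sketch of nested-JSON flattening; Cooladata's actual ETL-level logic, which also matches the known Firebase schema, is more involved:

```python
def flatten(obj, prefix=""):
    """Minimal sketch: recursively lift nested keys into flat column names."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat

flatten({"event_params": {"level": 3, "score": {"raw": 120}}})
# -> {'event_params_level': 3, 'event_params_score_raw': 120}
```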
  • Google Standard SQL support
    Up until now, Cooladata enabled running freehand queries over Cooladata using Standard SQL in CQL editors only. Cooladata now supports creating a fully Standard project, which will automatically support external tables in the Standard dialect, support Standard expressions, and allow querying the data with the KPI report. This is part of a strategic plan to migrate Cooladata to operate in Standard SQL, encompassing all the advantages of the language, including performance enhancements, query optimizations and superior functionality.
    For more information about the new Standard SQL functions (like “Unnest”, “With”, “Array” etc.) see our Standard SQL documentation.
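
For illustration, here is a short Standard SQL example using the functions named above (table and column names are made up):

```python
# Illustrative Standard SQL combining WITH, ARRAY_AGG and UNNEST.
standard_query = """
WITH purchases AS (
  SELECT user_id, ARRAY_AGG(product_id) AS products
  FROM events
  WHERE event_name = 'purchase'
  GROUP BY user_id
)
SELECT user_id, product_id
FROM purchases, UNNEST(products) AS product_id
"""
```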
  • Date condition for merging Aggregation and Models tables
    Up until now, using the “Append and Update” strategy allowed you to update existing rows (according to a unique key) and append new rows to the destination table. Starting from today, Cooladata allows adding a days filter (based on the partition column) to partitioned tables (Aggregation Tables or Models) using the “Append and Update” strategy, which enables replacing only the latest rows of the table. This feature improves execution performance and helps manage huge tables.
  • UI / UX improvements: Saving lists filters during the session and open in a new tab
    We’ve listened to our customers’ requests and continue to improve our user interface. As part of this effort, we now keep the list’s filters until the end of the session. This includes the search, column filters, sorting, the active tab and the number of results presented on the page (page size). In addition, we allow opening reports, dashboards and tasks in a new tab using CTRL+Click or a middle mouse click.
  • Shipment and cache notifications
    To improve query performance, Cooladata makes extensive use of caching and shipment.
    Whenever a report is executed, Cooladata notifies the user whether the data was returned from the cache or used shipment.
    Controlling whether a report will use the cache, as well as shipment, has never been easier: just click on “Report Options” and set the toggle ON or OFF.

    For more information see our report options documentation.
  • Ability to download models logs
    We added all the attached log files for each run, available for download in the “View logs” window.

 


Cooladata Fall 2018 Product Update 

In this product update you will find updates on:

New Features:

  • Standard SQL
  • Partitioning in External Tables
  • Writing Models to your external destination
  • Merge data using Integrations

Feature Improvements:

  • Simplifying working with several projects in Cooladata
  • Cohort Auto Captions
  • Column filters for Table Visualization
  • ETL auto-population by scope ignoring NULL values
  • Lookup values for virtual properties and expressions

New features:

  • Standard SQL
    Up until now, Cooladata’s SQL was compatible with Google BigQuery SQL Legacy dialect. Cooladata now enables running freehand queries over Cooladata using Standard SQL, as part of a strategic plan to migrate Cooladata to operate solely in Standard SQL, encompassing all the advantages of the language, including performance enhancements, query optimizations and superior functionality. Writing Standard SQL is now available when creating a CQL report, aggregation table, model, CQL segment or alert. For more information see our Standard SQL documentation.
  • Partitioning in External Tables
     Cooladata’s events and sessions tables are automatically partitioned by time. From now on, Cooladata also enables partitioning other external tables in your project by a timestamp or date column. Partitioning makes sure you scan only the selected time range in the table instead of the entire table, improving performance and minimizing query run time. Partitioning is available only when creating aggregation tables or models using the Standard SQL dialect, and such tables are only available for querying in the Standard SQL dialect (see the sketch after this list).
  • Merge data using Integrations
    Integrations fetching external data can now write to a table in “Append and Update strategy”, which will update existing rows (according to a unique key) and append new rows to the destination table.  This can reduce the number of rows fetched and integrated in each run to only the updated and new rows.
  • Writing Models to your external destination
    Models can now save their results to external destinations as available in aggregation tables: your Google Bucket or BigQuery dataset.
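
To make the partitioning point concrete, here is the kind of query that scans only a week of partitions instead of the whole table (table and column names are made up):

```python
# Illustrative only: the date filter restricts the scan to a few partitions.
partitioned_query = """
SELECT user_id, SUM(amount) AS total
FROM external_orders
WHERE order_date BETWEEN '2018-10-01' AND '2018-10-07'  -- partition column
GROUP BY user_id
"""
```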

Feature Improvements:

  • Simplifying working with several projects in Cooladata
    We’ve listened to our customers’ requests and enabled filtering the dashboards and reports shown in the lists by the active project. Since you can add reports from several projects to the same dashboard, some dashboards might be shown in a few projects. When creating filters, make sure you choose the right project and table to filter by in the filter widget.
  • Chart Visualization – Choose whether or not to auto-fit your chart
    We are aware that sometimes it’s the big picture showing big trends that is important in a chart, whereas at other times it’s the little details we want our data consumers to focus on. That’s exactly why we have enabled auto-fitting a single chart to the report size – so you can choose how your report will be viewed in a dashboard.

  • Cohort Auto Captions
    The cohort widget now auto-populates the cohort captions that explain to your report consumers what it is they are seeing. You can override the captions manually as you wish.

  • ETL auto-population by scope ignoring NULL values
    Sending an explicit Null for a session or user scope property will be ignored and the last non-null value sent will be auto-populated (“smeared”) going forward.
  • Column filters for Table Visualization
    Filtering a single value has never been easier. Just click on a single column header and choose the value you want to see.

  • Lookup values for virtual properties and expressions
    Lookup values are the list of values in the filter drop down. These are populated as part of the ETL, so they weren’t available for virtual properties and expressions until now. From now on, all properties can have a drop down list of values in filters.

Take control of your Data – How Cooladata lets you Manipulate data in Multiple ways

As you know, Cooladata lets you control which data to collect from numerous sources. You decide what you want to track, and Cooladata warehouses it for all your analytical purposes.

Once you’ve decided which sources you want to collect from, Cooladata normalizes the data and makes it very easy to execute queries, no matter how complex they are, and to summarize or aggregate data for reporting and serving to other people or platforms. Since accuracy is essential, Cooladata manages the data at raw-event granularity.

Cooladata validates your data against our self-learning schema to ensure it is correctly formatted and processed successfully. On top of this validation, you can control how your data is processed, transformed and enriched before it is stored, or manipulate it after it is stored for easier querying and reporting.

Recently, we’ve added two very useful features to the data manipulation “tool set” Cooladata offers its customers as part of every project:

  1. Custom Manipulations as part of the ETL
    This new feature allows you to customize manipulations on the data before it is stored and requires custom set-up with your Cooladata Customer Success Manager.

    – Blocking or invalidating events: every data scientist knows “garbage in – garbage out”. To avoid entering messy data into your project, you can block or invalidate events that match a certain condition as part of your custom project set-up. A common use case is blocking out bots or internal QA users (see the sketch after this list).

    – Changing events before they are stored: manipulating events as part of the ETL can serve several use cases, such as hashing personal information, changing data types, extracting specific parameters from long strings or arrays into designated columns, etc. The biggest advantage of this type of manipulation is that it’s done before the data is even stored, so it doesn’t require any further manipulation and helps a lot with achieving data consistency.

  2. Sending retroactive events back to the ETL
    Cooladata allows sending events to the ETL to be stored in the “cooladata” events table with the rest of your events. This is a very common case for events uploaded from an external DB, historical data storage, the invalids table or even a service API. This task can be scheduled to run automatically like any other task in Cooladata, and can even be set up as a step in a Job. Notice that these events are out of context, so the automatic sessionization done by our ETL might be affected. To avoid that, you can turn off sessionization for these events.
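
To make the blocking idea from item 1 concrete, here is a hypothetical predicate of the kind such a rule might express (the real rules are configured with your Customer Success Manager, not written by you):

```python
# Hypothetical blocking rule, for illustration only.
BOT_MARKERS = ("bot", "spider", "crawler")

def should_block(event: dict) -> bool:
    """Flag events from bots or internal QA users for blocking/invalidation."""
    ua = (event.get("user_agent") or "").lower()
    is_bot = any(marker in ua for marker in BOT_MARKERS)
    is_internal_qa = str(event.get("user_id", "")).startswith("qa_")
    return is_bot or is_internal_qa
```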

The features mentioned above are already deployed and ready to use. Just to serve as a reminder, here is the full list of data manipulation tools already available:

1. ETL – out of the box data manipulations:

  • Server-side sessionization: unifies sessions from all platforms into one single session, based on the user ID and the custom session definitions defined per project
  • Enrichment: automatic breakdown of user agent and IP into device details and geo location
  • User identity matching: unifies several identities of the same user. The most common use case is an anonymous user who registers and logs in in the middle of a session. Cooladata unifies the two users into one identity and stores the data accordingly. You can then query the entire journey of the user, from before registration to after, and understand the complete picture.
  • Auto-population based on scope: each column is auto-populated based on the scope of the property. For instance, a FIRST USER scope property stores the first value received for the user in all of that user’s events, whereas a LAST SESSION scope property stores the last non-null value in all the session’s events received after that value. This saves the analyst the effort of joining multiple events and tables. Each user also receives an automatic create-date property, to easily extract the first timestamp the user was seen in the app, and their first session is always marked as is_new=1.

2. Virtual Properties – Some properties require dynamic manipulations that are applied ad hoc while querying the data. This feature allows you to store SQL expressions as virtual properties, select these properties in your reports, and even filter by them in dashboards. The expression can contain a formula that hides the complex processing of a property. A common use case is a total order amount calculated from several columns, or the unification of two columns using the IFNULL function.
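
For illustration, here are two virtual-property expressions of the kinds mentioned above (column names are made up):

```python
# Illustrative SQL expressions one might store as virtual properties.
total_order_amount = "item_price * quantity - IFNULL(discount, 0)"
unified_campaign = "IFNULL(utm_campaign, referrer_campaign)"
```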

3. Aggregation Tables – Aggregation Tables automatically run scheduled calculations and data aggregations and save them in a separate, permanent table. This can be used to compute and store aggregated data in order to enhance the performance of queries running on large scales of data. Most customers store aggregation tables based on several data sources and multiple joined tables. Querying these tables is easier than writing the complex joins and SQL in each query.

4. Models – Models are designed to add R and Python capabilities to your Cooladata workflow. Models enable writing tasks based on R or Python scripts, allowing you to harness the capabilities of these languages to manipulate your data and save the data frames created by those models into tables in your project.

Both the Aggregation Tables and Models, as well as the new retroactive event sender task, can be scheduled as steps in our Jobs to make up a repeating data manipulation workflow.
