V-Spark Online Help

V‑Spark 3.5.0 Release Notes

V‑Spark Version 3.5.0 provides improvements in several areas.

  1. V‑Spark now supports distributed processing across multiple networked "nodes". This increases the availability, performance, and maintainability of your V‑Spark installation while providing a single integrated interface.

    • Running multiple components on multiple hosts increases availability by duplicating vital functions. Components can be started and stopped individually, enabling you to update single components without disabling the entire system.

    • Nodes share a modular storage architecture and distributed filesystem. This enables the system to handle uploads and downloads from more clients without storage bottlenecks.

    • V‑Spark now uses an improved queue-based processing flow. This reduces resource contention, increases the system's ability to utilize multiple resources, and improves your ability to monitor the progress of your data.

    If your V‑Spark processing needs exceed the computing capacity of one host, contact support@vocitec.com for more information about multi-node processing.

  2. Multiple changes and enhancements have been made to the V‑Spark REST API. If your workflow depends on the API, review this section carefully.

    1. Log API no longer supported - The functionality of the /log API has been incorporated into the /request API. The /log API is deprecated, and its jobmanager "processed.log" files are no longer supported. Any external tools that depend on this API call will need to be rewritten.

    2. Status API modified - The JSON schema of the data returned by the /status API has changed. The organization and content of the output have been significantly modified, and the queued count now reports the number of queued requests rather than the number of queued files. Any external tools that depend on this API call may need to be modified.
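The change to the queued count can be illustrated with a minimal sketch; the field names below are assumptions for illustration, not the documented schema:

```python
import json

# Hypothetical /status response body. The field names here are
# assumptions; consult the API documentation for the actual schema.
status_body = '''
{
    "queued": 3,
    "processing": 1,
    "completed": 128
}
'''

status = json.loads(status_body)

# "queued" now reports queued *requests*, not queued files.
print(f"Requests waiting in queue: {status['queued']}")
```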

    3. Partial /config calls may now be posted for companies - When using the V‑Spark API to create a new company, it is no longer necessary to POST a complete JSON configuration; you may POST a partial config that includes only the mandatory values for the new company. Similarly, when modifying a company's configuration, you no longer need to POST unchanged values; a partial config containing only the changes is sufficient.

      This change applies only to company configurations. Posting partial configuration updates for organizations, folders, applications, or users via the REST API is not supported.
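As a sketch of the partial-update workflow, the following builds a POST containing only the changed values. The host, path, and key names are hypothetical, for illustration only:

```python
import json
from urllib import request

# A partial company configuration containing only the values to change.
# Key names and the endpoint path are assumptions for illustration.
partial_config = {"company": {"shortname": "acme", "name": "Acme Corp"}}

req = request.Request(
    "http://vspark.example.com/config/acme",  # hypothetical host and path
    data=json.dumps(partial_config).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# request.urlopen(req) would submit the update; it is omitted here so the
# sketch stays self-contained.
print(req.get_method(), req.full_url)
```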

    4. New /config/perms API - The REST API now has a /config/perms call that returns information about which users have which permissions to which parts of the system.

    5. Error message for invalid JSON - Any V‑Spark API call that accepts JSON content now validates the POSTed JSON, and returns an Invalid JSON error if it is not valid.
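A client can mirror this server-side check locally before POSTing, for example:

```python
import json

def is_valid_json(payload: str) -> bool:
    """Check a payload locally before POSTing, mirroring the server-side
    validation that now returns an Invalid JSON error."""
    try:
        json.loads(payload)
        return True
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return False

print(is_valid_json('{"name": "Acme"}'))   # True
print(is_valid_json('{"name": "Acme"'))    # False: unbalanced brace
```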

    6. Region required for S3 submissions - Requests to the /transcribe API that use the s3key option to submit a file stored in AWS S3 must specify the Amazon S3 region of the bucket using the region option; otherwise, the request fails silently.
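A minimal sketch of building the query string for an S3 submission; the parameter values are hypothetical, and the point is that the region option must accompany s3key:

```python
from urllib.parse import urlencode

# Query parameters for a hypothetical /transcribe request that submits a
# file from S3. The values here are placeholders; without "region" the
# request would fail silently.
params = {
    "token": "EXAMPLE_TOKEN",
    "s3key": "s3://example-bucket/audio/call-001.wav",
    "region": "us-east-1",  # required whenever s3key is used
}

assert "region" in params, "S3 submissions must specify the bucket region"
query = urlencode(params)
print(query)
```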

    7. Failed file details in /request - The V‑Spark /request endpoint now lists files that could not be transcribed, along with details explaining why; for example, the file type was not supported or the file contained bad metadata.
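A hypothetical /request response fragment showing how failed-file details might be inspected; the field names are assumptions for illustration, not the documented schema:

```python
import json

# Hypothetical /request response fragment with failed-file details.
response_body = '''
{
    "files": [
        {"name": "call-001.wav", "status": "completed"},
        {"name": "call-002.bmp", "status": "failed",
         "reason": "file type not supported"}
    ]
}
'''

# Report each file that could not be transcribed, with its reason.
for f in json.loads(response_body)["files"]:
    if f["status"] == "failed":
        print(f"{f['name']}: {f['reason']}")
```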

  3. Data loading directly from file system no longer supported - V‑Spark no longer supports direct ingestion of data from the local filesystem. Always upload audio via the V‑Spark UI or API so that all inputs are properly queued and logged. If your workflow requires file system audio upload, contact support@vocitec.com to discuss solutions.

  4. Improvements to Folder processing status dialog - The ASR processing status section of this dialog has been improved to better show the progress of requests and files through V‑Spark's processing queues. The Queued column now shows the number of processing requests waiting to be processed, and not the number of files in the queue.

  5. The following improvements have been made to V‑Spark Application building and processing.

    1. Reprocess Applications by date range - When reprocessing Applications, you may now choose a date range instead of simply a start date for reprocessing. Only files within the selected date range will be reprocessed. You may still choose to have all files reprocessed, regardless of date.

    2. Category names can now include Unicode characters - The names of Application Categories may now include any character in the UTF-8 Unicode Standard. This includes accented characters and non-English characters of many types. The only restricted character is "." (U+002E), which is used in API searches. New categories can be created with these characters in their names, and existing categories can be modified using the Application Editor so that their names better match the style of their source language.

    3. Metadata filter import and export - Exporting a single category now includes metadata filters. This makes it easier to build applications and categories, as you can now export and import metadata filters, including both built-in and custom metadata filters.

  6. V‑Cloud tokens are now verified when entered - When you create a new Company that has a V‑Cloud authorization token, add a V‑Cloud authorization token to an existing Company, or update a Company's V‑Cloud authorization token, the system now verifies connectivity to V‑Cloud and the validity of the token before enabling V‑Cloud for the Company. This check occurs whether you are working through the V‑Spark user interface or the API.

  7. Callback Improvements - Callback status messages now include more specific information about status codes and whether the callback succeeded or failed. The improved messages also make it easier to identify company, organization, and folder names in log entries.

  8. Delete Files from V‑Spark - Transcript files can be deleted using the V‑Spark API or user interface. Deleted files are no longer available in the file list or transcript view. Note that summary data, stats, and dashboard views are not updated to reflect deletions.
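A sketch of a deletion request via the API; the host, endpoint path, and file identifier are hypothetical placeholders, not the documented call:

```python
from urllib import request

# Hypothetical DELETE request for a transcript file. The path and file
# identifier are assumptions for illustration.
req = request.Request(
    "http://vspark.example.com/transcribe/org/folder/call-001",
    method="DELETE",
)

# request.urlopen(req) would perform the deletion. Note that summary data,
# stats, and dashboard views are not updated to reflect it.
print(req.get_method(), req.full_url)
```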

  9. For greater system security, system administrators can now configure a set of password policies.

    1. Minimum password length - Administrators can now configure a minimum password length. Account passwords cannot be set or changed to values shorter than the required length. The minimum length is configurable and defaults to 7 characters.
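The policy can be mirrored client-side with a simple length check; this is a sketch of the rule, not V‑Spark's implementation:

```python
MIN_PASSWORD_LENGTH = 7  # the default minimum; configurable by administrators

def password_meets_length(password: str,
                          minimum: int = MIN_PASSWORD_LENGTH) -> bool:
    """Return True when the password satisfies the minimum-length policy."""
    return len(password) >= minimum

print(password_meets_length("hunter2"))  # True: exactly 7 characters
print(password_meets_length("short1"))   # False: only 6 characters
```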

    2. Password aging - Passwords can now expire after a configurable number of days; the default is 90. When an account password expires, the user is redirected at their next login attempt to a page where they must update their password. This policy can also be disabled so that passwords never expire.
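The aging policy amounts to a date comparison, sketched here with the 90-day default; this is an illustration of the rule, not V‑Spark's implementation:

```python
from datetime import date, timedelta

PASSWORD_MAX_AGE_DAYS = 90  # default expiry; configurable, or disabled entirely

def password_expired(last_changed: date, today: date,
                     max_age_days: int = PASSWORD_MAX_AGE_DAYS) -> bool:
    """Return True when the password is past its configured age limit."""
    return today - last_changed > timedelta(days=max_age_days)

print(password_expired(date(2020, 1, 1), date(2020, 4, 1)))  # True: 91 days elapsed
print(password_expired(date(2020, 1, 1), date(2020, 2, 1)))  # False: 31 days elapsed
```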