feat: combine otlp ingestion #4865
This PR fixes issue #4487.
…#4509) - dashboard shows a time-range-differs warning when cached data is used
fix #4508
#4508 Showing the wrong last trigger date; it displays as 1970. An ID is being shown in the tab field.
- After calling `browser.close()`, `browser.wait()` needs to be called to avoid zombie processes and close the browser properly.
- In the triggers usage report, destination-less reports use `cached_report` as the module type.
- The list reports API filters reports based on dashboard id.
Currently, cron-based derived streams do not use a timezone offset, so by default cron expressions are assumed to be in UTC. This PR sets the `tz_offset` field so that the cron expression works as per the browser timezone.
#4508 Search is not working for folders, dashboards, and dashboard tabs in the "Select Dashboard" option.
- #4493 - on dashboard panel zoom, refresh the whole dashboard
#4525
1. Sometimes a pagination partition bug occurred; this PR fixes it.
This PR adds a search to the dropdown of stream names.
…4531) This PR adds double quotes around the stream name so that any keyword used as a stream name is accepted.
Co-authored-by: hengfei yang <hengfei.yang@gmail.com>
…wn (#3693) - User Guide - Example:
#4541
Co-authored-by: omkark06 <omkar@zinclabs.io>
- #4552
…auto apply was false by default (#4556) - #4557
impl #4914: new parameters accepted in search queries to add additional context to the usage report:

**Dashboard**
- `dashboard_id`
- `dashboard_name`
- `dashboard_folder`
- `dashboard_folder_id`
- `dashboard_folder_name`

**Report**
- `report_key`

**Alert**
- `alert_key`

Co-authored-by: Sai Nikhil <nikhil@openobserve.ai>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Hengfei Yang <hengfei.yang@gmail.com>
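As a rough illustration of how the extra search-usage context above might be carried, here is a hypothetical sketch; the struct name, `Option` types, and defaults are invented for this example and are not the actual OpenObserve schema:

```rust
// Hypothetical container for the extra usage-report context listed above.
// Field names mirror the listed parameters; everything else is illustrative.
#[derive(Debug, Default)]
struct SearchUsageContext {
    dashboard_id: Option<String>,
    dashboard_name: Option<String>,
    dashboard_folder: Option<String>,
    dashboard_folder_id: Option<String>,
    dashboard_folder_name: Option<String>,
    report_key: Option<String>,
    alert_key: Option<String>,
}

fn main() {
    // A dashboard-originated search fills only the dashboard fields;
    // report/alert keys stay None.
    let ctx = SearchUsageContext {
        dashboard_id: Some("dash-1".into()),
        dashboard_name: Some("Latency".into()),
        ..Default::default()
    };
    assert!(ctx.report_key.is_none());
    assert!(ctx.alert_key.is_none());
    println!("{:?}", ctx.dashboard_id);
}
```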
Signed-off-by: Yashodhan Joshi <yjdoc2@gmail.com>
Signed-off-by: Yashodhan Joshi <yjdoc2@gmail.com>
Co-authored-by: ktx-vaidehi <vaidehi.akhani@kiara.tech>
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (5)
src/service/logs/otlp_grpc.rs (3)
129-137: Consider optimizing attribute handling

The current implementation creates a new String for each non-`service.name` attribute. Consider pre-allocating the string capacity.
```diff
- let key = format!("{}_{}", SERVICE, item.key);
+ let mut key = String::with_capacity(SERVICE.len() + 1 + item.key.len());
+ key.push_str(SERVICE);
+ key.push('_');
+ key.push_str(&item.key);
```
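A self-contained sketch of the pre-allocation suggestion above; the `SERVICE` constant value and the `service_<key>` shape are assumptions taken from the diff:

```rust
// Build the flattened attribute key with one up-front allocation instead
// of format!'s intermediate work. `SERVICE` is an assumed constant value.
const SERVICE: &str = "service";

fn build_key(item_key: &str) -> String {
    let mut key = String::with_capacity(SERVICE.len() + 1 + item_key.len());
    key.push_str(SERVICE);
    key.push('_');
    key.push_str(item_key);
    key
}

fn main() {
    // Same result as format!("{}_{}", SERVICE, "name"), without the
    // formatting machinery or reallocation.
    assert_eq!(build_key("name"), format!("{}_{}", SERVICE, "name"));
    println!("{}", build_key("name"));
}
```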
166-169: Enhance debug logging for skipped records

The current debug log could be more informative by including the timestamp details.
```diff
- log::debug!("skipping log record due to old ts");
+ log::debug!("skipping log record with timestamp {} (min allowed: {})", timestamp, min_ts);
```
Test coverage needs improvement
The current test suite only covers the basic happy path. Additional test cases should be added for:
- Error handling in the `encode_response` function
- Service name attribute handling and flattening in resource attributes
- Pipeline execution paths
- Error cases for old timestamps
- Trace and span ID handling
- Various input validation scenarios

Example locations that need coverage:
- `encode_response` function (line ~50)
- Resource attribute processing (lines ~150-170)
- Pipeline execution logic (lines ~270-320)
- Timestamp validation (lines ~180-190)
🔗 Analysis chain
Line range hint 426-516: Enhance test coverage

The test suite should be expanded to cover:
- Error cases in encode_response
- Service name attribute handling
- Key flattening scenarios
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check test coverage for the modified functions
rg -A 5 "^#\[test\]" src/service/logs/otlp_grpc.rs
```
Length of output: 52
Script:
```shell
#!/bin/bash
# Let's try a different approach to find test functions and their coverage
ast-grep --pattern $'#[test] async fn test_$_($$$) { $$$ }'
```
Length of output: 62
Script:
```shell
#!/bin/bash
# Let's check the content of the file to analyze test coverage
cat src/service/logs/otlp_grpc.rs
```
Length of output: 18837
src/handler/http/request/metrics/ingest.rs (1)
81-81: Adjust log levels to 'debug' for content type logging.

Logging content type handling at the 'info' level may clutter the logs with routine information. Consider using the 'debug' level to keep the logs focused on more significant events.
Apply these diffs:
For line 81:
```diff
- log::info!("otlp_metrics_write: got proto type content");
+ log::debug!("otlp_metrics_write: received protobuf content");
```
For line 92:
```diff
- log::info!("otlp_metrics_write: got json type content");
+ log::debug!("otlp_metrics_write: received JSON content");
```
Also applies to: 92-92
src/handler/http/request/logs/ingest.rs (1)
Line range hint 280-282: Prevent panics by safely handling header retrievals.

Unwrapping header values using `.unwrap()` can lead to panics if the headers are missing or contain invalid data. It's safer to handle these potential errors to maintain application stability.

Apply this diff to safely retrieve header values:
```diff
- let content_type = req.headers().get("Content-Type").unwrap().to_str().unwrap();
- let user_email = req.headers().get("user_id").unwrap().to_str().unwrap();
+ let content_type = match req.headers().get("Content-Type") {
+     Some(value) => match value.to_str() {
+         Ok(s) => s,
+         Err(_) => {
+             return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
+                 http::StatusCode::BAD_REQUEST.into(),
+                 "Invalid Content-Type header value".to_string(),
+             )))
+         }
+     },
+     None => {
+         return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
+             http::StatusCode::BAD_REQUEST.into(),
+             "Missing Content-Type header".to_string(),
+         )))
+     }
+ };
+ let user_email = match req.headers().get("user_id") {
+     Some(value) => match value.to_str() {
+         Ok(s) => s,
+         Err(_) => {
+             return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
+                 http::StatusCode::BAD_REQUEST.into(),
+                 "Invalid user_id header value".to_string(),
+             )))
+         }
+     },
+     None => {
+         return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
+             http::StatusCode::BAD_REQUEST.into(),
+             "Missing user_id header".to_string(),
+         )))
+     }
+ };
```
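The no-unwrap pattern suggested above can be sketched generically. This stand-in uses a plain `HashMap` in place of actix-web's `HeaderMap` (an assumption made purely to keep the example self-contained and runnable):

```rust
use std::collections::HashMap;

// Return the header value, or a descriptive error instead of panicking.
fn require_header<'a>(
    headers: &'a HashMap<String, String>,
    name: &str,
) -> Result<&'a str, String> {
    headers
        .get(name)
        .map(|v| v.as_str())
        .ok_or_else(|| format!("Missing {} header", name))
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("Content-Type".to_string(), "application/json".to_string());

    // Present header: Ok; absent header: Err the caller can turn into a 400.
    assert_eq!(require_header(&headers, "Content-Type").unwrap(), "application/json");
    assert!(require_header(&headers, "user_id").is_err());
    println!("ok");
}
```

In the real handler the `Err` branch would map to the `HttpResponse::BadRequest` responses shown in the diff above.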
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- src/handler/http/request/logs/ingest.rs (4 hunks)
- src/handler/http/request/metrics/ingest.rs (3 hunks)
- src/service/logs/mod.rs (0 hunks)
- src/service/logs/otlp_grpc.rs (9 hunks)
- src/service/logs/otlp_http.rs (0 hunks)
- src/service/metrics/mod.rs (0 hunks)
- src/service/metrics/otlp_http.rs (0 hunks)
💤 Files with no reviewable changes (4)
- src/service/logs/mod.rs
- src/service/logs/otlp_http.rs
- src/service/metrics/mod.rs
- src/service/metrics/otlp_http.rs
🧰 Additional context used
📓 Path-based instructions (3)
src/handler/http/request/logs/ingest.rs (1)
Pattern **/*.rs: You are a smart rustlang pull request reviewer.
You are going to review all the rustlang files.
Be concise, and add a brief explanation to your suggestions
Make sure the responses are not too verbose and keep the suggestions to the point i.e. actionable.
src/handler/http/request/metrics/ingest.rs (1)
Pattern **/*.rs: You are a smart rustlang pull request reviewer.
You are going to review all the rustlang files.
Be concise, and add a brief explanation to your suggestions
Make sure the responses are not too verbose and keep the suggestions to the point i.e. actionable.
src/service/logs/otlp_grpc.rs (1)
Pattern **/*.rs: You are a smart rustlang pull request reviewer.
You are going to review all the rustlang files.
Be concise, and add a brief explanation to your suggestions
Make sure the responses are not too verbose and keep the suggestions to the point i.e. actionable.
🔇 Additional comments (1)
src/service/logs/otlp_grpc.rs (1)
181-183: 🛠️ Refactor suggestion
Add error handling for key flattening
The key flattening operation should handle potential errors to prevent panics.
```diff
- let mut key = item.key.clone();
- flatten::format_key(&mut key);
+ let mut key = item.key.clone();
+ if let Err(e) = flatten::format_key(&mut key) {
+     log::warn!("Failed to flatten key '{}': {}", item.key, e);
+     continue;
+ }
```

Likely invalid or redundant comment.
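The fallible key-flattening flow suggested above can be sketched end to end. Note the real `flatten::format_key` in OpenObserve may have a different signature; this stand-in fails on empty keys purely for illustration:

```rust
// Stand-in for flatten::format_key: normalizes a key, or errors so the
// caller can log and skip instead of panicking.
fn format_key(key: &mut String) -> Result<(), String> {
    if key.is_empty() {
        return Err("empty key".to_string());
    }
    *key = key.replace('.', "_").to_lowercase();
    Ok(())
}

fn main() {
    let items = vec!["service.name", "", "Http.Status"];
    let mut flattened = Vec::new();
    for item in items {
        let mut key = item.to_string();
        // On failure, log and skip the record rather than unwrap.
        if let Err(e) = format_key(&mut key) {
            eprintln!("Failed to flatten key '{}': {}", item, e);
            continue;
        }
        flattened.push(key);
    }
    assert_eq!(flattened, vec!["service_name", "http_status"]);
    println!("{:?}", flattened);
}
```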
```rust
return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
    http::StatusCode::BAD_REQUEST.into(),
    format!("Invalid json: {}", e),
)));
}
```
Correct error message for protobuf parsing failures.
When decoding protobuf content fails, the error message should indicate that the protobuf data is invalid, not that it's invalid JSON. This provides accurate feedback to the client.
Apply this diff to fix the error message:
```diff
- format!("Invalid json: {}", e),
+ format!("Invalid protobuf data: {}", e),
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
    http::StatusCode::BAD_REQUEST.into(),
    format!("Invalid protobuf data: {}", e),
)));
}
```
```rust
match config::utils::json::from_slice(body.as_ref()) {
    Ok(v) => v,
    Err(e) => {
        return Ok(HttpResponse::BadRequest().json(MetaHttpResponse::error(
            http::StatusCode::BAD_REQUEST.into(),
            format!("Invalid json: {}", e),
        )));
    }
}
```
💡 Codebase verification
Inconsistent deserialization type detected
The code shows a potential type mismatch in the deserialization flow:
- The file imports `ExportMetricsServiceRequest` from the OpenTelemetry protobuf definitions.
- The deserialized JSON is passed directly to `handle_grpc_request`, which expects an `ExportMetricsServiceRequest`.
- However, `config::utils::json::from_slice` is not explicitly typed to deserialize into `ExportMetricsServiceRequest`.

This could lead to runtime errors if the JSON deserialization doesn't match the expected type structure.
Suggested fixes:
- Add an explicit type annotation: `let metrics: ExportMetricsServiceRequest = config::utils::json::from_slice(body.as_ref())?;`
- Or implement proper JSON deserialization for `ExportMetricsServiceRequest` if not already present.
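The annotation point generalizes: when a parsing function is generic over its output type, the compiler needs the target type pinned at the call site. A minimal sketch using a stdlib stand-in for the generic deserializer (not the actual `config::utils::json::from_slice`):

```rust
use std::str::FromStr;

// Stand-in for a generic deserializer like json::from_slice: the output
// type T is chosen entirely by the caller's annotation.
fn parse<T: FromStr>(s: &str) -> Result<T, T::Err> {
    s.parse()
}

fn main() {
    // Without `: i64`, type inference has nothing to pin T to and the
    // call fails to compile; with it, the intended type is explicit.
    let n: i64 = parse("42").unwrap();
    assert_eq!(n, 42);

    // The same input can deserialize to a different type if annotated
    // differently, which is exactly why the explicit annotation matters.
    let f: f64 = parse("42").unwrap();
    assert_eq!(f, 42.0);
    println!("{} {}", n, f);
}
```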
🔗 Analysis chain
Ensure consistent deserialization into ExportMetricsServiceRequest.
Verify that config::utils::json::from_slice correctly deserializes JSON content into an ExportMetricsServiceRequest instance, matching the type expected by handle_grpc_request. This ensures type consistency and prevents runtime errors.
Please run the following script to confirm that config::utils::json::from_slice deserializes into ExportMetricsServiceRequest:
This will help ensure that the JSON parsing aligns with the expected data structures.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that JSON deserialization returns ExportMetricsServiceRequest

# Search for uses of `config::utils::json::from_slice` related to metrics
rg -A 3 'config::utils::json::from_slice' src/service/metrics/

# Check if `ExportMetricsServiceRequest` is the deserialized type
rg -A 5 'impl ExportMetricsServiceRequest' src/

# Verify the deserialization implementation
ast-grep --pattern $'impl Deserialize for ExportMetricsServiceRequest { $$$ }'
```
Length of output: 192
Script:
```shell
#!/bin/bash
# Let's try a broader search to understand the type and deserialization

# Find the type definition and usage of ExportMetricsServiceRequest
rg -A 3 "ExportMetricsServiceRequest"

# Look for the json deserialization implementation in config utils
rg -A 3 "from_slice" src/config/utils/

# Check the actual type annotation in the ingest handler
rg -A 5 "fn handle_grpc_request" src/
```
Length of output: 5627
Script:
```shell
#!/bin/bash
# Let's check the specific file content to understand the deserialization flow

# Check the content of the ingest.rs file
rg -A 10 "match config::utils::json::from_slice" src/handler/http/request/metrics/ingest.rs

# Look for the function signature that contains this deserialization
rg -B 5 "match config::utils::json::from_slice" src/handler/http/request/metrics/ingest.rs

# Check if there's any type annotation for the deserialization
ast-grep --pattern 'let $_: ExportMetricsServiceRequest = config::utils::json::from_slice($_)'
```
Length of output: 953
Script:
```shell
#!/bin/bash
# Let's check the entire function context to understand the type flow

# Get the full function context
rg -B 20 -A 20 "async fn write" src/handler/http/request/metrics/ingest.rs

# Check where the deserialized value is used
rg -A 10 "handle_grpc_request" src/handler/http/request/metrics/ingest.rs

# Look for any type declarations or imports at the top of the file
rg -A 10 "^use" src/handler/http/request/metrics/ingest.rs
```
Length of output: 1284
Fixes #4982
- Adds migration for pipelines for the ofga tuple database.
- Fixes some RBAC permission logic for pipelines.

Co-authored-by: Taiming Liu <liutaiming3@gmail.com>
#4936
impl #4974 envs to configure usage reporting jobs:
1. `ZO_USAGE_REPORTING_THREAD_NUM`: number of threads handling the background usage reporting job. Default: 0, meaning `cpu_num / 2` in local mode and `cpu_num` in distributed mode.
2. `ZO_USAGE_BATCH_SIZE`: number of usage records buffered in memory before the buffer is ingested into the usage org. Default: 2,000, applied to each thread.
3. `ZO_USAGE_PUBLISH_INTERVAL`: maximum number of seconds between ingestions, regardless of whether `ZO_USAGE_BATCH_SIZE` is met. Default: 60s, applied to each thread.
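Env-driven knobs like the ones above are typically read with a parse-or-default helper. A minimal sketch; the helper names are invented and not OpenObserve's actual config API:

```rust
// Core logic, separated from std::env so it is easy to test: parse the
// raw value if present and valid, otherwise fall back to the default.
fn usize_or(raw: Option<&str>, default: usize) -> usize {
    raw.and_then(|v| v.parse().ok()).unwrap_or(default)
}

// Thin wrapper that reads the process environment.
fn env_usize(key: &str, default: usize) -> usize {
    usize_or(std::env::var(key).ok().as_deref(), default)
}

fn main() {
    // Unset -> documented default; set -> parsed; garbage -> default.
    assert_eq!(usize_or(None, 2000), 2000);
    assert_eq!(usize_or(Some("500"), 2000), 500);
    assert_eq!(usize_or(Some("oops"), 2000), 2000);

    // With ZO_USAGE_BATCH_SIZE unset in this process, the default applies.
    println!("{}", env_usize("ZO_USAGE_BATCH_SIZE", 2000));
}
```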
Fixes #4999
- Ownership should be set only when writing to metadata streams for the first time.
- For distinct values, ownership should be set on flush, not on write.
Adds e2e test cases for reports.
…and log detail (#5011) #5010
fixes #4794
This merges the JSON- and protobuf-format OTLP metrics and logs ingestion logic, so that a single function handles processing and consuming the data. Needs more testing.
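The core idea of the PR can be sketched as one shared processing path where only the decode step varies by encoding. All names and the toy decoders below are stand-ins, not the actual OpenObserve code, which decodes OTLP `ExportLogsServiceRequest` / `ExportMetricsServiceRequest` payloads:

```rust
// Which wire format the request body arrived in.
enum Encoding {
    Json,
    Protobuf,
}

// Stand-in decoders: the real code parses OTLP request messages.
fn decode_json(body: &[u8]) -> Result<Vec<String>, String> {
    let s = std::str::from_utf8(body).map_err(|e| e.to_string())?;
    Ok(s.split(',').map(|r| r.to_string()).collect())
}

fn decode_proto(body: &[u8]) -> Result<Vec<String>, String> {
    // Pretend each byte is one record id.
    Ok(body.iter().map(|b| b.to_string()).collect())
}

// The single ingestion function: decoding differs per encoding, but the
// downstream processing path is shared instead of being duplicated.
fn ingest(body: &[u8], enc: Encoding) -> Result<usize, String> {
    let records = match enc {
        Encoding::Json => decode_json(body)?,
        Encoding::Protobuf => decode_proto(body)?,
    };
    Ok(records.len()) // shared "process and consume" step
}

fn main() {
    assert_eq!(ingest(b"a,b,c", Encoding::Json).unwrap(), 3);
    assert_eq!(ingest(&[1, 2], Encoding::Protobuf).unwrap(), 2);
    println!("ok");
}
```

The payoff of this shape is that bug fixes and pipeline changes land once in the shared path rather than twice in parallel JSON and protobuf handlers.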