2.27.0 |
2.44.0 |
5.16.x |
Improved LPAI profiling initialization using the ADSP/ENPU clock.
Corrected errors in the LPAI Backend Op Definition Supplement.
Performance improvements for certain model patterns.
Better support for advanced normalization operations.
More accurate handling of tensor layouts and shapes.
Fixes to ensure correct outputs for select model types (e.g., GRU).
Improved memory‑usage reporting for easier model planning.
Improved runtime stability and error handling.
Enhanced support for performance-related data (e.g., clock info).
Better consistency with platform requirements.
General fixes that improve execution reliability.
|
2.26.0 |
2.43.0 |
5.15.x |
Documentation enhancements for clarity and usability
Improved profiling by retrieving clock data during initialization
Updated internal system definitions (no user action required)
Corrected transpose operation handling before and after gather connections
Resolved Windows-specific conversion errors
|
2.25.0 |
2.42.0 |
5.14.x |
Improved memory handling and DMA job optimization for faster execution.
Enhanced constant tensor prefetch and alignment for better runtime performance.
Improved tensor ROI handling and synchronization in DSP engine.
Improved accuracy in model computations.
Fixed memory handling and alignment issues.
Enhanced stability in layer operations and data processing.
Documentation: Improved LPAI documentation for easier integration.
Documentation: Added instructions for Windows on Snapdragon (WoS) execution.
Documentation: Updated guides for backend upgrade and profiling steps.
|
2.24.0 |
2.41.0 |
5.13.4db9e20f |
Improved system stability and performance
Broader support for model types and data formats
Enhanced compatibility with customer platforms
Better handling of intermediate data for smoother execution
Expanded support for key operations to improve model flexibility
|
2.23.0 |
2.40.0 |
5.12.ee881b2e |
|
2.22.0 |
2.39.0 |
5.12.ee881b2e |
Add core selection config option
Support multi-type attributes
Enable dspqueue for enpuV6 and enpuV5_1
|
2.21.0 |
2.38.0 |
5.11.2d1a28fc |
Profiling improvements
PSRAM memory support
Support Async Execution
|
2.20.0 |
2.37.0 |
5.10.8421ef45 |
|
2.19.0 |
2.36.0 |
5.9.0c847578 |
LPAI BE adapted to work with 5.9 enpu components
More profiling improvements
Reset Call handling improvements
Added OP support: RmsNorm, Buffer Op Padding Param
|
2.18.0 |
2.35.0 |
5.8.35a52877 |
LPAI BE adapted to work with 5.8 enpu components
More profiling improvements
Added OP support: StatelessLSTM/GRU
|
2.17.0 |
2.34.0 |
5.7.9931d1e5 |
Simplified LPAI BE library names
LPAI Documentation improvements
Updated enpu compile, execute, and simulator libraries
LPAI BE adapted to work with 5.7 enpu components
More profiling improvements
Added OP support: BatchToSpace, SpaceToBatch, DepthToSpace, SpaceToDepth, Quantize, Dequantize
|
2.16.0 |
2.32.0 |
5.6.057f0428 |
Add device discovery mechanism across different LPAI-supported HW
Add support for LPAI graph early termination
Fix runtime stability issues
Improved Profiling information
Added OP support: Channel_Shuffle
|
2.15.0 |
2.31.0 |
5.5.addf3789 |
|
2.14.0 |
2.30.0 |
5.4.ad7bd0ed |
|
2.13.1 |
2.29.0 |
4.14.f7310daf |
|
2.13.0 |
2.28.0 |
4.13.efc48af1 |
Align QNN LPAI config file defaults
Runtime check for compiled LPAI models older than v4.6, which are no longer supported
Prepare for LPAI Runtime Island Support
|
2.12.0 |
2.27.0 |
4.12.5bf898df |
Add profiling support: graph execute time
Bug fix for unsupported float tensor
Bug fix for reset tensors as input
|
2.11.0 |
2.26.1 |
4.11.7fa40f05 |
|
2.11.0 |
2.25.1 |
4.11.7fa40f05 |
Ops updates and bug fixes: Split, LSTM, GRU, Batchnorm, Conv, StridedSlice, ReduceProd
Add profiling support: per layer execution time, layer fusion info, layer linking info
|
2.10.0 |
2.24.1 |
4.10.68490b7c |
Add Op support: Buffer (Framer)
Per-layer profiling support via configs
Version Checks for LPAI Execute
Start support for lightweight internal aDSP build
Memory allocation for internal aDSP Execute
|
2.9.0 |
2.23.1 |
4.9.0xa3f3a016 |
Optimize memory planning
Fix memory leaks during model conversion
Add Op support: Power, Layernorm, Split
Fixed memory leak when initialization fails
Fixed stability issue related to multiple threads/processes
Add API profiler parameter
|
2.8.0 |
2.22.1 |
4.8.42ed4bf0-v79 |
|
2.7.1 |
2.20.1 |
3.17.4446b3115 |
|
2.7.0 |
2.18.1 |
3.17.4446b311 |
Added: Support for model visualizer and eNPU4 on linux_x86, windows, hexagonsim, and adsp
Added: LPAI BE getSupportedOperations()
Fixes: Duplicate tensorID errors during the QNN change from tensor name to tensorID
Fixes: Sanitize quantizations with invalid parameters
|
2.6.0 |
2.17.1 |
3.16.ae82aa87 |
|
2.5.0 |
2.14.1 |
3.14.827f8f95 |
Fixes: Resize op now accounts for annotations on the sizes tensor
Fixes: Deconv2D segfault caused by incorrect data size calculated by the model builder
Fixes: Segfault in case of min/max with a constant tensor
Fixes: Expand tensor shape when ranks differ
|
2.4.0 |
2.13.1 |
3.12.a4c0bd91 |
Fixes: Proper bias alignment for gemm with square output
Fixes: Improve layernorm accuracy for small hidden_size
Added: Support for different permutations of 1D bias matmul.
Added: New memory planning compiler pipeline
|
2.3.0 |
2.12.1 |
3.10.e4d3cfa4 |
|
2.2.0 |
2.11.1 |
3.8.74ea5d84 |
|
2.1.0 |
2.10.1 |
3.6.546129c7 |
Fixes: Symmetric quantization and compression ratio calculation
Added: Support for multiple nodes with the same id, input name from onnx node for per-layer dump
|
2.0.0 |
2.9.1 |
|
|