3/18/2023

Tf permute

This page lists the layers/operators supported in the current TIDL version. By default, TIDL releases include an import tool supporting Caffe, TensorFlow, ONNX and tflite. The import tool translates the training result file (prototxt + caffemodel, pb, onnx) into TIDL format.

Validated networks on host/PC emulation and EVM:
- 19 classification networks across Caffe, TensorFlow and ONNX.
- 8+ SSD-based object detection networks trained on Caffe, ONNX and the TensorFlow Object Detection API.
- 6+ pixel-wise segmentation models trained on Caffe, ONNX and TensorFlow.
- Detection output layer (SSD post-processing as defined in caffe-Jacinto and the TF Object Detection API).
- Re-size layer (for bilinear/nearest-neighbor up-sample).
- Faster RCNN ROI pooling (as defined in the TF Object Detection API).

Please refer to the Sample Pre-trained CNN Models for TIDL for the full list. In the current release, only models/networks listed in the Sample Pre-trained CNN Models for TIDL are supported; any network not included in this list will NOT be supported. More networks (including custom networks) will be supported in future releases.

Convolution: Regular and depth-wise convolutions are imported as conv. ReLU, BN and pooling will be merged into conv to get better performance, and 1x1 conv will be converted to innerproduct. Validated kernel sizes are 1x1, 3x3, 5x5 and 7x7; if stride = 2, the kernel should be less than 7, and if stride = 4, only an 11x11 kernel is supported. If (kernelH * kernelW * inputChannels / groupNum + enableBias) % 64 != 0, performance is not optimal. 16-bit is not optimal in the current version.

Batch normalization: ReLU, Scale, Bias and PReLU are merged and imported as BN. All channel-wise broadcast operations are mapped to BN now.

Pooling: validated pooling sizes are 1x1 (MAX, stride 1x1/2x2), 2x2 and 3x3.

InnerProduct: a feature size larger than 2048*2048 is not optimal; please use global pooling/flatten before innerproduct.

Softmax: please use global pooling/flatten before softmax.

Deconvolution: only a 4x4 kernel with 2x2 stride is supported; it is recommended to use Resize/Upsample instead to get better performance.

Concat: does channel-wise combination by default, and will be width-wise if coming after a flatten layer.

ArgMax: only axis = 1 is supported, mainly for the last layer of semantic segmentation.

Split: the split layer will be removed after import.

Pad: padding is taken care of during the import process, and this layer is automatically removed by the import tool.

Dropout: this layer is only used in training, and it is automatically removed during the import process.

Comment 1: SSD (DetectionOutput/Permute/PriorBox/Reshape) usage. DetectionOutput, Permute, PriorBox and Reshape should be used along with each other. The DetectionOutput layer is the important marker for the SSD context: the TIDL import tool searches for PriorBox, Permute and Reshape based on this layer.
- Concat + Reshape + Softmax + DetectionOutput in the SSD context is automatically changed to Concat + DetectionOutput; Reshape and Softmax are integrated into the DetectionOutput layer by the import tool.
- The PriorBox branch is removed automatically, and the prior-box info is written into the DetectionOutput layer. PriorBox is only supported together with the DetectionOutput layer in the SSD context.
- The Permute layer is removed automatically by the import tool; the permute operation is done inside the DetectionOutput layer.
- If the DetectionOutput layer is not used, Permute, PriorBox and Reshape are not allowed; you have to implement them all outside TIDL. You can also use metaArch to implement SSD applications; please refer to the TIDL metaArch User Guide.

Any unrecognized layers/operators will be converted to TIDL_UnsupportedLayer as a place-holder.
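The import tool mentioned above is driven by a plain-text configuration file. The fragment below is a rough sketch from memory only: the field names and the modelType encoding vary between TIDL releases, so treat every name here as an assumption and verify it against the import tool documentation shipped with your SDK:

```
# Sketch of a TIDL import config -- field names are assumptions; verify
# against your release's import tool documentation.
modelType        = 2                      # e.g. 0: Caffe, 1: TensorFlow, 2: ONNX, 3: tflite
inputNetFile     = "model.onnx"           # training result file to translate
outputNetFile    = "tidl_net_model.bin"   # imported TIDL network
outputParamsFile = "tidl_params_model.bin"
```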
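As a toy illustration of the place-holder behaviour described above, the sketch below maps operator names against a supported set. The set here is an illustrative subset I chose for the example, not TIDL's actual operator list:

```python
# Illustrative subset only -- NOT TIDL's actual supported-operator list.
SUPPORTED_OPS = {"Conv", "Relu", "BatchNormalization", "MaxPool",
                 "AveragePool", "Concat", "Reshape", "Softmax", "Flatten"}

def to_tidl_ops(op_types):
    """Replace every operator the importer does not recognize with the
    TIDL_UnsupportedLayer place-holder."""
    return [op if op in SUPPORTED_OPS else "TIDL_UnsupportedLayer"
            for op in op_types]

print(to_tidl_ops(["Conv", "Relu", "SomeCustomOp"]))
# -> ['Conv', 'Relu', 'TIDL_UnsupportedLayer']
```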
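The convolution constraints above can be checked mechanically before importing a model. The sketch below encodes them as warnings; the function and parameter names are illustrative, not a TIDL API:

```python
# Kernel sizes validated per the notes above.
VALIDATED_KERNELS = {(1, 1), (3, 3), (5, 5), (7, 7)}

def conv_warnings(kernel_h, kernel_w, in_channels,
                  group_num=1, enable_bias=1, stride=1):
    """Collect performance warnings for a conv layer configuration,
    following the TIDL notes above (illustrative helper, not a TIDL API)."""
    warnings = []
    if (kernel_h, kernel_w) not in VALIDATED_KERNELS and stride != 4:
        warnings.append("kernel size not in validated set 1x1/3x3/5x5/7x7")
    if stride == 2 and max(kernel_h, kernel_w) >= 7:
        warnings.append("stride 2: kernel should be less than 7")
    if stride == 4 and (kernel_h, kernel_w) != (11, 11):
        warnings.append("stride 4: only an 11x11 kernel is supported")
    # (kernelH * kernelW * inputChannels / groupNum + enableBias) % 64 != 0
    # means performance is not optimal.
    if (kernel_h * kernel_w * in_channels // group_num + enable_bias) % 64 != 0:
        warnings.append("MAC count per output is not a multiple of 64")
    return warnings

print(conv_warnings(3, 3, 64, enable_bias=0))   # -> []
print(conv_warnings(7, 7, 64, stride=2))        # stride-2 kernel warning
```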
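The SSD rewrites described above can be sketched on a flat list of layer-type strings. This is a toy model of the network, not a real TIDL graph; it only mirrors the stated behaviour that, in an SSD context, Permute and PriorBox are dropped and Reshape and Softmax are folded into DetectionOutput:

```python
def fold_ssd_head(layer_types):
    """Toy sketch of the import-tool rewrite: within an SSD context
    (marked by DetectionOutput), Permute and PriorBox are removed and
    Reshape/Softmax are folded into the DetectionOutput layer."""
    if "DetectionOutput" not in layer_types:
        # No SSD context: Permute/PriorBox/Reshape are not allowed and
        # would have to be implemented outside TIDL.
        return layer_types
    folded = {"Permute", "PriorBox", "Reshape", "Softmax"}
    return [l for l in layer_types if l not in folded]

# Concat + Reshape + Softmax + DetectionOutput -> Concat + DetectionOutput
print(fold_ssd_head(["Concat", "Reshape", "Softmax", "DetectionOutput"]))
```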