
Metal (6): Rendering a Video File


This example uses Metal to read a video file and render it to the screen. (Playback has no audio at this point.)
Approach:

  1. Use AVAssetReaderTrackOutput from AVFoundation to read the raw data into CMSampleBuffers; each CMSampleBuffer holds one frame of data.
  2. In the Metal render callback, convert the CMSampleBuffer data into a CVPixelBufferRef.
  3. Use Core Video to obtain the Y texture and the UV texture.
  4. A custom fragment function converts YUV to RGBA for display.
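Both the shaders and the Objective-C code below refer to a few shared types and index constants (CCVertex, CCConvertMatrix, CCVertexInputIndexVertices, CCFragmentTextureIndexTextureY, CCFragmentTextureIndexTextureUV, CCFragmentInputIndexMatrix). The original bridging header is not shown in this article; the following is a minimal sketch reconstructed from how these names are used, so the exact layout and index values are assumptions:

// CCShaderTypes.h — shared between the .metal file and the Objective-C code.
// NOTE: layout and index values are inferred from usage in this article (an assumption).
#include <simd/simd.h>

typedef enum CCVertexInputIndex {
    CCVertexInputIndexVertices = 0,
} CCVertexInputIndex;

typedef enum CCFragmentBufferIndex {
    CCFragmentInputIndexMatrix = 0,
} CCFragmentBufferIndex;

typedef enum CCFragmentTextureIndex {
    CCFragmentTextureIndexTextureY  = 0,
    CCFragmentTextureIndexTextureUV = 1,
} CCFragmentTextureIndex;

typedef struct {
    vector_float4 position;          // clip-space position (x, y, z, w)
    vector_float2 textureCoordinate; // texture coordinate (s, t)
} CCVertex;

typedef struct {
    matrix_float3x3 matrix; // YUV -> RGB conversion matrix
    vector_float3 offset;   // offset added to YUV before the matrix multiply
} CCConvertMatrix;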

1. Vertex Function and Fragment Function

  • Vertex function

The vertex function takes the vertex positions and texture coordinates as input.

// Struct used as the vertex function output / fragment function input
typedef struct
{
    // The [[position]] attribute marks this as the vertex's clip-space position
    float4 clipSpacePosition [[position]];
    // Texture coordinate
    float2 textureCoordinate;
} RasterizerData;

// RasterizerData: the data returned to the rasterizer and passed on to the fragment function
// vertex_id is the index of the vertex the shader is currently processing
// buffer(...) marks a buffer argument; CCVertexInputIndexVertices is its index
vertex RasterizerData
vertexShader(uint vertexID [[ vertex_id ]],
             constant CCVertex *vertexArray [[ buffer(CCVertexInputIndexVertices) ]])
{
    RasterizerData out;
    // Vertex position
    out.clipSpacePosition = vertexArray[vertexID].position;
    // Texture coordinate
    out.textureCoordinate = vertexArray[vertexID].textureCoordinate;
    return out;
}
  • Fragment function
    YUV is converted to RGB with a conversion matrix:
float3 rgb = convertMatrix->matrix * (yuv + convertMatrix->offset);

The fragment function takes the Y texture, the UV texture, and the conversion matrix as input.

fragment float4
samplingShader(RasterizerData input [[stage_in]],
               texture2d<float> textureY [[ texture(CCFragmentTextureIndexTextureY) ]],
               texture2d<float> textureUV [[ texture(CCFragmentTextureIndexTextureUV) ]],
               constant CCConvertMatrix *convertMatrix [[ buffer(CCFragmentInputIndexMatrix) ]])
{
    //1. Create the texture sampler
    constexpr sampler textureSampler (mag_filter::linear, min_filter::linear);
    /*2. Read the YUV values
       textureY.sample(textureSampler, input.textureCoordinate).r
           samples textureY at the texture coordinate and takes the R channel (Y).
       textureUV.sample(textureSampler, input.textureCoordinate).rg
           samples textureUV at the texture coordinate and takes the RG channels (UV). */
    float3 yuv = float3(textureY.sample(textureSampler, input.textureCoordinate).r,
                        textureUV.sample(textureSampler, input.textureCoordinate).rg);
    //3. Convert YUV to RGB: convertMatrix->matrix * (yuv + convertMatrix->offset)
    float3 rgb = convertMatrix->matrix * (yuv + convertMatrix->offset);
    //4. Return the color (RGBA)
    return float4(rgb, 1.0);
}
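As a quick sanity check of the conversion, take a sample with neutral chroma, yuv = (0.5, 0.5, 0.5), together with the BT.601 full-range matrix and offset that are set up in section 3 below (this particular input value is only an illustration):

    yuv + offset = (0.5 - 16.0/255.0, 0.5 - 0.5, 0.5 - 0.5) ≈ (0.437, 0.0, 0.0)
    rgb = matrix * (0.437, 0.0, 0.0) = 0.437 * (1.0, 1.0, 1.0) ≈ (0.437, 0.437, 0.437)

Since the chroma offsets cancel, only the first column of the matrix contributes, and a neutral-chroma sample comes out as a gray pixel, as expected.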

2. Setting Up the Textures


  • Create a Core Video Metal texture from an existing image buffer, copying each frame's data from the pixel buffer into a Metal texture.

Parameters of CVMetalTextureCacheCreateTextureFromImage:
Parameter 1: allocator — the memory allocator, normally kCFAllocatorDefault
Parameter 2: textureCache — the texture cache object
Parameter 3: sourceImage — the video image buffer
Parameter 4: textureAttributes — texture attributes dictionary, normally NULL
Parameter 5: pixelFormat — the Metal pixel format constant for the image buffer data. Note: if the format (e.g. MTLPixelFormatBGRA8Unorm) does not match the color format configured when the frames were captured or decoded, the image will look wrong
Parameter 6: width — width of the texture image in pixels
Parameter 7: height — height of the texture image in pixels
Parameter 8: planeIndex — for planar image buffers, the index of the plane to map; ignored for non-planar buffers
Parameter 9: textureOut — on return, the newly created Metal texture

// Mapping a BGRA buffer:
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &outTexture);
// Mapping the luma plane of a 420v buffer:
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatR8Unorm, width, height, 0, &outTexture);
// Mapping the chroma plane of a 420v buffer as a source texture:
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatRG8Unorm, width/2, height/2, 1, &outTexture);
// Mapping a yuvs buffer as a source texture (note: yuvs/f and 2vuy are unpacked and resampled -- not colorspace converted)
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatGBGR422, width, height, 1, &outTexture);
  • Read the CVPixelBuffer out of the CMSampleBuffer:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
  • Get the Metal texture object from the texture buffer, then assign this temporary texture to the global texture object:
CVMetalTextureGetTexture(tmpTexture);

Sample code (Objective-C). In this article, this block forms the body of the setupTextureWithEncoder:buffer: method that the draw code calls in step 9:

//1. Read the CVPixelBuffer out of the CMSampleBuffer
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
id<MTLTexture> textureY = nil;
id<MTLTexture> textureUV = nil;

// textureY setup
{
    //2. Get the width/height of the luma plane
    size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
    //3. Pixel format: a single 8-bit normalized unsigned integer component
    MTLPixelFormat pixelFormat = MTLPixelFormatR8Unorm;
    //4. Core Video Metal texture
    CVMetalTextureRef texture = NULL;
    //5. Create a Metal texture from the video pixel buffer
    CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &texture);
    //6. Check whether the texture was created successfully
    if(status == kCVReturnSuccess)
    {
        //7. Convert to a texture that Metal can use
        textureY = CVMetalTextureGetTexture(texture);
        //8. Release it once we are done
        CFRelease(texture);
    }
}

//9. textureUV setup (same approach as textureY)
{
    size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
    size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
    MTLPixelFormat pixelFormat = MTLPixelFormatRG8Unorm;
    CVMetalTextureRef texture = NULL;
    CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.textureCache, pixelBuffer, NULL, pixelFormat, width, height, 1, &texture);
    if(status == kCVReturnSuccess)
    {
        textureUV = CVMetalTextureGetTexture(texture);
        CFRelease(texture);
    }
}

//10. Check whether both textureY and textureUV were obtained
if(textureY != nil && textureUV != nil)
{
    //11. Bind textureY to the fragment function
    [encoder setFragmentTexture:textureY atIndex:CCFragmentTextureIndexTextureY];
    //12. Bind textureUV to the fragment function
    [encoder setFragmentTexture:textureUV atIndex:CCFragmentTextureIndexTextureUV];
}

//13. Release the sampleBuffer as soon as we are done with it
CFRelease(sampleBuffer);

3. Metal Rendering

For the detailed steps of Metal rendering, see:
https://blog.csdn.net/weixin_40918107/article/details/108135662
For the detailed steps of converting YUV to RGB with Metal, see:
https://blog.csdn.net/weixin_40918107/article/details/108269790

This article will not re-analyze every one of those steps; it goes straight to the code:

  • Set up the MTKView
//1. Initialize the mtkView
self.mtkView = [[MTKView alloc] initWithFrame:self.view.bounds];
// Get the default device
self.mtkView.device = MTLCreateSystemDefaultDevice();
// Set self.view = self.mtkView
self.view = self.mtkView;
// Set the delegate
self.mtkView.delegate = self;
// Get the viewport size
self.viewportSize = (vector_uint2){self.mtkView.drawableSize.width, self.mtkView.drawableSize.height};
  • Set up the render pipeline
//1. Get the .metal library
/*
 newDefaultLibrary: recommended when there is a single default .metal file
 newLibraryWithFile:error: load a metal library from a specified file
 newLibraryWithData:error: load a metal library from data
 */
id<MTLLibrary> defaultLibrary = [self.mtkView.device newDefaultLibrary];
// Vertex shader; vertexShader is the function name
id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];
// Fragment shader; samplingShader is the function name
id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"samplingShader"];

//2. Render pipeline descriptor
MTLRenderPipelineDescriptor *pipelineStateDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
// Set the vertex function
pipelineStateDescriptor.vertexFunction = vertexFunction;
// Set the fragment function
pipelineStateDescriptor.fragmentFunction = fragmentFunction;
// Set the color pixel format
pipelineStateDescriptor.colorAttachments[0].pixelFormat = self.mtkView.colorPixelFormat;

//3. Create the render pipeline state from the descriptor
// Creating the pipeline is expensive, so avoid calling this frequently
self.pipelineState = [self.mtkView.device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor
                                                                          error:NULL];
//4. The command queue guarantees that render commands are submitted to the GPU in order
self.commandQueue = [self.mtkView.device newCommandQueue];
  • Set up the vertex data
//1. Vertex positions (x, y, z, w) and texture coordinates (x, y)
// Note: to make the video fill the screen, the vertex positions span [-1, 1]
static const CCVertex quadVertices[] =
{   // position (x, y, z, w)        texture coordinate (x, y)
    { {  1.0, -1.0, 0.0, 1.0 },  { 1.f, 1.f } },
    { { -1.0, -1.0, 0.0, 1.0 },  { 0.f, 1.f } },
    { { -1.0,  1.0, 0.0, 1.0 },  { 0.f, 0.f } },
    { {  1.0, -1.0, 0.0, 1.0 },  { 1.f, 1.f } },
    { { -1.0,  1.0, 0.0, 1.0 },  { 0.f, 0.f } },
    { {  1.0,  1.0, 0.0, 1.0 },  { 1.f, 0.f } },
};
//2. Create the vertex buffer
self.vertices = [self.mtkView.device newBufferWithBytes:quadVertices
                                                 length:sizeof(quadVertices)
                                                options:MTLResourceStorageModeShared];
//3. Compute the number of vertices
self.numVertices = sizeof(quadVertices) / sizeof(CCVertex);
  • Set up the YUV-to-RGB conversion matrix
//1. Conversion matrices
// BT.601, the standard for SDTV
matrix_float3x3 kColorConversion601DefaultMatrix = (matrix_float3x3){
    (simd_float3){1.164,  1.164, 1.164},
    (simd_float3){0.0,   -0.392, 2.017},
    (simd_float3){1.596, -0.813, 0.0},
};
// BT.601 full range
matrix_float3x3 kColorConversion601FullRangeMatrix = (matrix_float3x3){
    (simd_float3){1.0,    1.0,   1.0},
    (simd_float3){0.0,   -0.343, 1.765},
    (simd_float3){1.4,   -0.711, 0.0},
};
// BT.709, the standard for HDTV
matrix_float3x3 kColorConversion709DefaultMatrix = (matrix_float3x3){
    (simd_float3){1.164,  1.164, 1.164},
    (simd_float3){0.0,   -0.213, 2.112},
    (simd_float3){1.793, -0.533, 0.0},
};

//2. Offset
vector_float3 kColorConversion601FullRangeOffset = (vector_float3){ -(16.0/255.0), -0.5, -0.5};

//3. Fill the conversion matrix struct
CCConvertMatrix matrix;
// Pick one of the conversion matrices:
/*
 kColorConversion601DefaultMatrix;
 kColorConversion601FullRangeMatrix;
 kColorConversion709DefaultMatrix;
 */
matrix.matrix = kColorConversion601FullRangeMatrix;
// Set the offset
matrix.offset = kColorConversion601FullRangeOffset;

//4. Create the conversion matrix buffer
self.convertMatrix = [self.mtkView.device newBufferWithBytes:&matrix
                                                      length:sizeof(CCConvertMatrix)
                                                     options:MTLResourceStorageModeShared];
  • Draw
//1. Create a new command buffer for every render pass
id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
// Get the current render pass descriptor
MTLRenderPassDescriptor *renderPassDescriptor = view.currentRenderPassDescriptor;

//2. Read a frame of image data from the CCAssetReader
CMSampleBufferRef sampleBuffer = [self.reader readBuffer];

//3. Check that both renderPassDescriptor and sampleBuffer were obtained
if(renderPassDescriptor && sampleBuffer)
{
    //4. Set the clear color of the color attachment (background color)
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.5, 0.5, 1.0f);
    //5. Create a render command encoder from the render pass descriptor
    id<MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
    //6. Set the viewport (the display region)
    [renderEncoder setViewport:(MTLViewport){0.0, 0.0, self.viewportSize.x, self.viewportSize.y, -1.0, 1.0 }];
    //7. Set the render pipeline state on the encoder
    [renderEncoder setRenderPipelineState:self.pipelineState];
    //8. Set the vertex buffer
    [renderEncoder setVertexBuffer:self.vertices
                            offset:0
                           atIndex:CCVertexInputIndexVertices];
    //9. Set the textures (upload the sampleBuffer data through the renderEncoder)
    [self setupTextureWithEncoder:renderEncoder buffer:sampleBuffer];
    //10. Set the conversion matrix for the fragment function
    [renderEncoder setFragmentBuffer:self.convertMatrix
                              offset:0
                             atIndex:CCFragmentInputIndexMatrix];
    //11. Draw
    [renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle
                      vertexStart:0
                      vertexCount:self.numVertices];
    //12. End encoding
    [renderEncoder endEncoding];
    //13. Present the drawable
    [commandBuffer presentDrawable:view.currentDrawable];
}
//14. Commit the command buffer
[commandBuffer commit];
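The draw code above runs from the MTKView delegate. A minimal sketch of how it is hooked up, assuming the property names used throughout this article (the delegate method signatures come from MetalKit's MTKViewDelegate protocol):

// Keep the viewport size in sync when the drawable size changes
- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size
{
    self.viewportSize = (vector_uint2){size.width, size.height};
}

// Called once per frame; the "Draw" code listed above goes here
- (void)drawInMTKView:(MTKView *)view
{
    // ... the draw code shown above ...
}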

4. Reading the Video File into CMSampleBuffers with AVFoundation

Use AVAssetReaderTrackOutput from AVFoundation to read the raw data into CMSampleBuffers; each CMSampleBuffer holds one frame of data.

Set up the CCAssetReader in the ViewController:

// Note: CCAssetReader supports both MOV and MP4 files
//1. Path of the video file
//NSURL *url = [[NSBundle mainBundle] URLForResource:@"my" withExtension:@"mov"];
NSURL *url = [[NSBundle mainBundle] URLForResource:@"mingren" withExtension:@"mp4"];
//2. Initialize the CCAssetReader
self.reader = [[CCAssetReader alloc] initWithUrl:url];
//3. Create the _textureCache (Core Video provides a high-speed cache channel for the CPU/GPU to read texture data)
CVMetalTextureCacheCreate(NULL, NULL, self.mtkView.device, NULL, &_textureCache);

In this example the reading logic is wrapped in its own class:

@implementation CCAssetReader
{
    // The video track output
    AVAssetReaderTrackOutput *readerVideoTrackOutput;
    // AVAssetReader obtains decoded audio/video data from the raw asset
    AVAssetReader *assetReader;
    // Video URL
    NSURL *videoUrl;
    // Lock
    NSLock *lock;
}

// Initialization
- (instancetype)initWithUrl:(NSURL *)url
{
    self = [super init];
    if(self != nil)
    {
        videoUrl = url;
        lock = [[NSLock alloc] init];
        [self setUpAsset];
    }
    return self;
}

// Asset setup
- (void)setUpAsset
{
    // AVURLAssetPreferPreciseDurationAndTimingKey defaults to NO; YES asks for a precise duration
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];

    //1. AVURLAsset is a subclass of AVAsset used to initialize an asset from a local/remote URL
    AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:videoUrl options:inputOptions];

    //2. Load the asset asynchronously
    // weakSelf avoids a retain cycle
    __weak typeof(self) weakSelf = self;
    // The key to load
    NSString *tracks = @"tracks";
    // Perform the standard asynchronous loading for the keys we need, so that accessing
    // the asset's tracks property later will not block
    [inputAsset loadValuesAsynchronouslyForKeys:@[tracks] completionHandler: ^{
        // Extend self's lifetime for the duration of the block
        __strong typeof(self) strongSelf = weakSelf;
        // Process the loaded inputAsset asynchronously on a global concurrent queue
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSError *error = nil;
            // Get the loading status
            AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error];
            // If loading did not succeed, log the error and return
            if (tracksStatus != AVKeyValueStatusLoaded)
            {
                NSLog(@"error %@", error);
                return;
            }
            // Process the loaded inputAsset
            [strongSelf processWithAsset:inputAsset];
        });
    }];
}

// Process the loaded asset
- (void)processWithAsset:(AVAsset *)asset
{
    // Lock
    [lock lock];
    NSLog(@"processWithAsset");
    NSError *error = nil;

    //1. Create the AVAssetReader
    assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];

    //2. kCVPixelBufferPixelFormatTypeKey: the pixel format
    /*
     kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange : 420v
     kCVPixelFormatType_32BGRA : iOS converts YUV to BGRA internally
     */
    NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
    [outputSettings setObject:@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    /*3. Create readerVideoTrackOutput
     assetReaderTrackOutputWithTrack:(AVAssetTrack *)track outputSettings:(nullable NSDictionary<NSString *, id> *)outputSettings
     Parameter 1: which track of the asset to read
     Parameter 2: the video output settings
     */
    readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];
    // alwaysCopiesSampleData: whether the buffers are copied before being vended.
    // YES: the output always vends copies that you are free to modify
    readerVideoTrackOutput.alwaysCopiesSampleData = NO;

    //4. Add the output to the assetReader
    [assetReader addOutput:readerVideoTrackOutput];

    //5. Start reading and check whether it succeeded
    if ([assetReader startReading] == NO)
    {
        NSLog(@"Error reading from file at URL: %@", asset);
    }
    // Unlock
    [lock unlock];
}

// Read one buffer of data
- (CMSampleBufferRef)readBuffer
{
    // Lock
    [lock lock];
    CMSampleBufferRef sampleBufferRef = nil;

    //1. Check that readerVideoTrackOutput was created successfully
    if (readerVideoTrackOutput)
    {
        // Copy the next sample buffer into sampleBufferRef
        sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
    }

    //2. If the assetReader has finished reading, clear readerVideoTrackOutput/assetReader
    //   and re-initialize them, so the video starts again from the beginning
    if (assetReader && assetReader.status == AVAssetReaderStatusCompleted)
    {
        NSLog(@"customInit");
        readerVideoTrackOutput = nil;
        assetReader = nil;
        [self setUpAsset];
    }
    // Unlock
    [lock unlock];

    //3. Return the sample buffer that was read
    return sampleBufferRef;
}
@end
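Only the implementation is shown above. The matching interface, inferred from how the class is used in the ViewController and in the draw code (initWithUrl: and readBuffer), would look roughly like this — a sketch, not the author's original header:

// CCAssetReader.h — inferred from usage in this article (an assumption)
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

@interface CCAssetReader : NSObject

// Initialize the reader with a local video file URL (MOV/MP4)
- (instancetype)initWithUrl:(NSURL *)url;

// Return the next decoded video frame as a CMSampleBuffer (the caller releases it)
- (CMSampleBufferRef)readBuffer;

@end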
