Preface: this was my first time integrating iFlytek (科大讯飞) speech dictation, and the integration turned out to be fairly straightforward. First, the effect screenshots — this uses the no-UI version of the API.
Demo link
The screenshots show two things: first, the iFlytek speech dictation integration itself, which converts speech to text; second, a voice-volume animation I added to make the recording state more visual.
The first step is to register an application in the iFlytek developer center — only once the application is approved do you get an AppID, which is required for speech dictation to work. For the full integration details, see the development documentation in the iFlytek developer center; here I'll focus on wiring it into a project.
1. Get your APPID, add the service, download the corresponding SDK, take iflyMSC.framework out of the SDK, and add it to your project.
(screenshot: iflyMSC.framework added to the project)
2. Add the required framework dependencies.
(screenshot: linked framework dependencies in the target's build settings)
If everything went normally, the basic iFlytek integration is done at this point. The code comes next.
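One step the screenshots don't show: recording requires microphone permission. On iOS 10 and later the app must declare a usage description in Info.plist, otherwise the system terminates the app as soon as the SDK starts recording. A minimal entry (the description wording is up to you):

```xml
<!-- Info.plist: required for microphone access on iOS 10+ -->
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is needed for speech dictation.</string>
```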
In AppDelegate, follow the developer documentation:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
    self.window.backgroundColor = [UIColor whiteColor];
    ViewController *vc = [[ViewController alloc] init];
    UINavigationController *nav = [[UINavigationController alloc] initWithRootViewController:vc];
    self.window.rootViewController = nav;
    [self.window makeKeyAndVisible];
    [self createOtherFrameWork];
    return YES;
}
// Initialize the iFlytek SDK
- (void)createOtherFrameWork {
    // Set the SDK log level; logs are saved in the work path set below
    [IFlySetting setLogFile:LVL_ALL];
    // Enable log output to the console
    [IFlySetting showLogcat:YES];
    // Set the SDK work path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    NSString *cachePath = [paths objectAtIndex:0];
    [IFlySetting setLogFilePath:cachePath];
    NSString *initString = [[NSString alloc] initWithFormat:@"appid=%@", APPID];
    [IFlySpeechUtility createUtility:initString];
}
- (BOOL)application:(UIApplication *)application openURL:(NSURL *)url sourceApplication:(NSString *)sourceApplication annotation:(id)annotation {
    [[IFlySpeechUtility getUtility] handleOpenURL:url];
    return YES;
}
In ViewController.m, add the standard no-UI setup:
@property (nonatomic, strong) IFlySpeechRecognizer *iFlySpeechRecognizer;
Create the speech recognizer:
- (void)MSCSpeech {
    // Create the speech recognizer
    _iFlySpeechRecognizer = [IFlySpeechRecognizer sharedInstance];
    // Set recognition parameters
    // Dictation mode
    [_iFlySpeechRecognizer setParameter:@"iat" forKey:[IFlySpeechConstant IFLY_DOMAIN]];
    // asr_audio_path is the recording file name; set the value to nil or empty to disable saving.
    // Files are saved under Library/cache by default.
    [_iFlySpeechRecognizer setParameter:@"iat.pcm" forKey:[IFlySpeechConstant ASR_AUDIO_PATH]];
    // Disable punctuation in the returned text
    [_iFlySpeechRecognizer setParameter:@"0" forKey:[IFlySpeechConstant ASR_PTT]];
    [_iFlySpeechRecognizer setDelegate:self];

    [self.view addSubview:self.speechButton];
    [self.view addSubview:self.speechLabel];
    // [self.view addSubview:self.audioStreamButton];
    // Start the recognition session
    // [_iFlySpeechRecognizer startListening];
    if (_speechView == nil) {
        _speechView = [MSCVoiceRecordToastContentView new];
        _speechView.center = CGPointMake(self.view.frame.size.width / 2, self.speechButton.frame.origin.y - 100);
        _speechView.bounds = CGRectMake(0, 0, 120, 120);
        _speechView.backgroundColor = [[UIColor blackColor] colorWithAlphaComponent:0.5];
        _speechView.layer.cornerRadius = 6;
        _speechView.hidden = YES;
        [self.view addSubview:_speechView];
    }
}
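The demo's button action isn't shown above. A minimal sketch of how `speechButton` could toggle a session — the selector name `speechButtonAction:` is hypothetical, and it assumes the button was created with `selected` tracking the recording state:

```objectivec
// Hypothetical button action: toggles dictation on/off (sketch, not from the original demo)
- (void)speechButtonAction:(UIButton *)sender {
    sender.selected = !sender.selected;
    if (sender.selected) {
        self.speechView.hidden = NO;
        [_iFlySpeechRecognizer startListening];  // begin a dictation session
    } else {
        self.speechView.hidden = YES;
        [_iFlySpeechRecognizer stopListening];   // stop recording; remaining results still arrive in onResults:
    }
}
```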
Of course, declare the delegate protocol, and implement the iFlytek delegate callbacks:
// IFlySpeechRecognizerDelegate implementation
// Recognition result callback
- (void)onResults:(NSArray *)results isLast:(BOOL)isLast {
    NSMutableString *resultString = [[NSMutableString alloc] init];
    NSDictionary *dic = results[0];
    for (NSString *key in dic) {
        [resultString appendFormat:@"%@", key];
    }
    NSString *resultFromJson = [MSCHelper stringFromJson:resultString];
    self.speechLabel.text = [NSString stringWithFormat:@"%@%@", self.speechLabel.text, resultFromJson];
    // NSLog(@"Current result: %@", resultFromJson);
    if ([_iFlySpeechRecognizer isListening]) {
        NSLog(@"Still recognizing");
    }
}
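The `MSCHelper stringFromJson:` helper isn't shown in this post. The dictation result is a JSON string whose text sits in nested `ws` → `cw` → `w` fields; a sketch of what such a helper might look like (assumed implementation, not the demo's actual code):

```objectivec
// Sketch of a stringFromJson: helper, assuming the documented iat result
// shape: {"ws":[{"cw":[{"w":"text"}]}]}
+ (NSString *)stringFromJson:(NSString *)json {
    NSMutableString *text = [NSMutableString string];
    NSData *data = [json dataUsingEncoding:NSUTF8StringEncoding];
    NSDictionary *result = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
    for (NSDictionary *ws in result[@"ws"]) {   // one entry per word segment
        for (NSDictionary *cw in ws[@"cw"]) {   // candidate words; append each "w"
            [text appendString:cw[@"w"]];
        }
    }
    return text;
}
```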
// Recognition session ended callback
- (void)onError:(IFlySpeechError *)error {
    NSLog(@"Error: %@", error.errorDesc);
    if (self.speechButton.selected) {
        [_iFlySpeechRecognizer startListening];
    } else {
        [_iFlySpeechRecognizer cancel];
    }
}
// Recording ended callback
- (void)onEndOfSpeech {
    NSLog(@"Recording ended");
}
// Recording started callback
- (void)onBeginOfSpeech {
    NSLog(@"Recording started");
}
// Volume change callback
- (void)onVolumeChanged:(int)volume {
    [self.speechView updateWithPower:volume];
}
// Session cancelled callback
- (void)onCancel {
    NSLog(@"Current session cancelled");
}
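`updateWithPower:` belongs to the custom `MSCVoiceRecordToastContentView`, whose implementation isn't shown here. One way such a method could drive a simple pulse animation from the volume callback — a sketch only, where `indicatorView` is a hypothetical subview of the toast:

```objectivec
// Sketch: map the callback volume (roughly 0–30) to a scale pulse
// (assumed implementation of the custom view's updateWithPower:)
- (void)updateWithPower:(int)power {
    CGFloat scale = 1.0 + MIN(power, 30) / 30.0 * 0.4;  // scale between 1.0 and 1.4
    [UIView animateWithDuration:0.1 animations:^{
        self.indicatorView.transform = CGAffineTransformMakeScale(scale, scale);
    }];
}
```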
That completes the iFlytek speech dictation integration — you can try running the app now. Personally, though, a screen with no UI felt bare, so I wrote my own voice-volume animation, timer included. It's simple to use, and it produces the effect shown in the screenshots above. My network has been poor these last couple of days and my VPN keeps stalling; I'll post the Demo when I get the chance, or leave me a comment below if you need it sooner.