
Exploring WebSocket: Techniques for Voice and Images

2015/12/26 · JavaScript · 3 comments · websocket

Original source: AlloyTeam

When it comes to WebSocket, most of you are probably already familiar with it. If not, that's fine too — it can be summed up in one sentence:

"The WebSocket protocol is a new protocol in HTML5. It achieves full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is simply a huge improvement. We can wave goodbye to comet and long polling, and be glad we live in an era that has HTML5~

This article explores WebSocket in three parts.

First, common ways of using WebSocket; second, building a server-side WebSocket entirely by ourselves; and finally, the essential part — two demos built with WebSocket, image transfer and an online voice chat room. Let's go

Part 1: Common WebSocket usage

Here I'll introduce three implementations that I consider common. (Note: this article assumes a Node.js environment.)

1. socket.io

First, a demo:

JavaScript

var http = require('http');
var io = require('socket.io');

var server = http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html; charset=utf-8'});
    res.end();
}).listen(8888);

// attach socket.io to the plain http server
var socket = io.listen(server);

socket.sockets.on('connection', function(socket) {
    // push an event to the client
    socket.emit('xxx', {options: {}});

    // react to events coming from the client
    socket.on('xxx', function(data) {
        // do something
    });
});

Anyone who knows WebSocket will surely know socket.io, because socket.io is so famous and so good: it handles timeouts, handshakes and so on for you. I'd guess it's also the most widely used WebSocket implementation. The nicest thing about socket.io is graceful degradation: when the browser doesn't support WebSocket, it quietly falls back internally to long polling and the like, and neither users nor developers have to care about the details — very convenient.

But everything has two sides. Precisely because socket.io is so complete, it also brings some pitfalls — above all it is bulky, and its encapsulation adds quite a bit of communication overhead to the data. And graceful degradation, its big selling point, has gradually lost its shine as browser standardization has progressed:

Chrome Supported in version 4
Firefox Supported in version 4
Internet Explorer Supported in version 10
Opera Supported in version 10
Safari Supported in version 5

I'm not saying here that socket.io is bad or obsolete — just that sometimes we can also consider some other implementations~

 

2. The http module

We just said socket.io is bulky, so now let's talk about something lightweight. First, the demo:

JavaScript

var http = require('http');
var server = http.createServer();

// the 'upgrade' event fires when a client asks to switch protocols (e.g. to WebSocket)
server.on('upgrade', function(req) {
    console.log(req.headers);
});

server.listen(8888);

A very simple implementation. In fact socket.io handles WebSocket the same way internally, except that it wraps a lot of handler processing on top — we could flesh that out ourselves too. Here are two screenshots from the socket.io source:

[Figure 1: socket.io source code screenshot]

[Figure 2: socket.io source code screenshot]
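
To make this route a bit more concrete, here is a minimal sketch of my own (not from the original article) of what the 'upgrade' handler could do with the raw socket: answer the handshake itself and then speak WebSocket frames directly on that socket. It uses the same magic GUID that appears in Part 2 below.

JavaScript

var http = require('http');
var crypto = require('crypto');

var server = http.createServer();

// the 'upgrade' event hands us the raw TCP socket; if we answer the
// handshake ourselves we can talk the WebSocket protocol on it directly
server.on('upgrade', function(req, socket) {
    var key = req.headers['sec-websocket-key'];
    var accept = crypto.createHash('sha1')
        .update(key + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11')
        .digest('base64');

    socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
                 'Upgrade: websocket\r\n' +
                 'Connection: Upgrade\r\n' +
                 'Sec-WebSocket-Accept: ' + accept + '\r\n\r\n');

    // from here on, every chunk arriving on the socket is a WebSocket frame
    // (see decodeDataFrame/encodeDataFrame in Part 2)
    socket.on('data', function(chunk) {
        console.log('received frame bytes:', chunk.length);
    });
});

server.listen(8888);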

 

3. The ws module

A later example will use it, so I'll just mention it here; the details come further down~
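
Still, as a quick taste — a minimal sketch of my own, assuming the ws package has been installed with npm install ws — an echo server takes only a few lines, because ws does the handshake and frame parsing for us:

JavaScript

var WebSocketServer = require('ws').Server;

var wss = new WebSocketServer({ port: 8888 });

wss.on('connection', function(socket) {
    socket.on('message', function(message) {
        // echo whatever the client sends straight back
        socket.send('echo: ' + message);
    });
});

On the browser side this pairs with the one-line client shown in Part 2: new WebSocket("ws://127.0.0.1:8888").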

 

Part 2: Implementing a server-side WebSocket ourselves

We've just covered three common ways of using WebSocket. Now let's think about it from a developer's point of view:

compared with the traditional HTTP request/response model, WebSocket adds server-pushed events, and the client simply handles those events as they arrive — development-wise the difference really isn't that big, is it?

That's because these modules have already filled in all the pitfalls of data-frame parsing for us. In this second part we'll try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research and support. I'll only cover this part briefly here; if you're curious about the details, search for 【web技术研究所】.

Building a server-side WebSocket yourself comes down to two things: using the net module to receive the data stream, and parsing the data against the official frame-structure diagram. Once those two parts are done, all of the low-level work is finished.

First, a packet capture of the WebSocket handshake request a client sends.

The client code is extremely simple:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

[Figure 3: captured WebSocket handshake request headers]

The server has to respond based on this key: take the key, append a specific string, run SHA-1 over it once, base64-encode the result, and send it back:

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // grab the Sec-WebSocket-Key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (\S+)/)[1];
            // append the WS magic string, run SHA-1 once, then base64-encode the digest
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // write the response headers back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // this field carries the key processed by the server
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // an empty line ends the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);

With that, the handshake part is done; what's left is the work of parsing and building data frames.

First, let's look at the frame structure diagram from the official spec:

[Figure 4: WebSocket data frame structure]

A quick rundown:

FIN marks whether this is the final fragment of a message.

The RSV bits are reserved and are 0.

The opcode marks the data type: continuation fragment, text or binary, heartbeat (ping/pong), and so on.

Here is the opcode table:

[Figure 5: opcode values]

MASK marks whether masking is used.

Payload len, together with the extended payload length that follows, gives the data length — this is the most troublesome part.

Payload len is only 7 bits; as an unsigned integer that only covers 0 to 127, which is obviously too small to describe large payloads. So the spec says: if the data length is less than or equal to 125, this field is the data length itself; if the value is 126, the following two bytes store the data length; and if it is 127, the following eight bytes store the data length.
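
As a rough illustration of that rule (a sketch of my own that mirrors the encodeDataFrame function shown later; the MASK bit, which shares the first of these bytes, is ignored here), the length bytes for a few payload sizes look like this:

JavaScript

// returns the length byte(s) that follow the first byte of a frame
function lengthBytes(len) {
    if (len <= 125) {
        return [len];                                    // fits in the 7-bit field
    } else if (len < 0x10000) {
        return [126, (len & 0xFF00) >> 8, len & 0xFF];   // 126 + 2 extra bytes
    }
    // 127 + 8 extra bytes (only the low 32 bits are filled in here)
    return [127, 0, 0, 0, 0,
            (len & 0xFF000000) >> 24, (len & 0xFF0000) >> 16,
            (len & 0xFF00) >> 8, len & 0xFF];
}

console.log(lengthBytes(100));    // [ 100 ]
console.log(lengthBytes(300));    // [ 126, 1, 44 ]   since 300 = 0x012C
console.log(lengthBytes(70000));  // [ 127, 0, 0, 0, 0, 0, 1, 17, 112 ]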

The Masking-key is the 4-byte masking key (present when MASK is 1).

Below is the code for parsing a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0, j, s,
        frame = {
            FIN: e[i] >> 7,                  // final-fragment flag
            Opcode: e[i++] & 15,             // opcode: low 4 bits of the first byte
            Mask: e[i] >> 7,                 // mask flag
            PayloadLength: e[i++] & 0x7F     // 7-bit payload length
        };

    // 126: the real length is in the next 2 bytes
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    // 127: the real length is in the next 8 bytes (only the low 4 are read here)
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        // 4-byte masking key; the payload is XORed against it byte by byte
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i + j] ^ frame.MaskingKey[j % 4]);
        }
    } else {
        s = e.slice(i, i + frame.PayloadLength);
    }

    s = new Buffer(s);

    // opcode 1 is a text frame, so convert the payload to a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}

And then the code for building a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    // first byte: FIN flag plus opcode
    s.push((e.FIN << 7) + e.Opcode);

    // then the payload length, following the 125/126/127 rules
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }

    return Buffer.concat([new Buffer(s), o]);
}

Both functions just follow the frame-structure diagram; I won't go into more detail here, since the focus of this article is the next part. If you're interested in this, head over to 【web技术研究所】~
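
To see how the pieces fit together, here is a rough echo-server sketch of my own (not from the original article). It assumes the handshake snippet, decodeDataFrame and encodeDataFrame above live in the same file, and it ignores TCP fragmentation (it treats every 'data' chunk as exactly one frame):

JavaScript

// inside the net.createServer callback from the handshake example,
// the data handler could be extended like this:
o.on('data', function(e) {
    if(!key) {
        // ... the handshake code shown earlier goes here ...
        return;
    }

    var frame = decodeDataFrame(e);

    // opcode 8 is a close frame: drop the connection
    if(frame.Opcode === 8) {
        o.end();
        return;
    }

    // echo text frames back, unmasked (server-to-client frames are not masked)
    o.write(encodeDataFrame({
        FIN: 1,
        Opcode: 1,
        PayloadData: 'echo: ' + frame.PayloadData
    }));
});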

 

Part 3: Transferring images over WebSocket, and a WebSocket voice chat room

Now for the main part. The point of this article is really to show a couple of WebSocket use cases.

1. Transferring images

Let's first think through the steps for transferring an image: the server receives the client's request, reads the image file, and forwards the binary data to the client. How does the client handle it? With a FileReader object, of course.

The client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888"); ws.onopen = function(){ console.log("握手成功"); }; ws.onmessage = function(e) { var reader = new File里德r(); reader.onload = function(event) { var contents = event.target.result; var a = new Image(); a.src = contents; document.body.appendChild(a); } reader.readAsDataU汉兰达L(e.data); };

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

When a message is received we call readAsDataURL and add the base64 image straight to the page.
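
Base64-encoding via readAsDataURL also inflates the payload by roughly a third. An alternative sketch (my own variation, not from the original demo) hands the received Blob to URL.createObjectURL and skips the FileReader entirely:

JavaScript

ws.onmessage = function(e) {
    // e.data is a Blob for binary frames (the browser default)
    var img = new Image();
    img.onload = function() {
        // release the object URL once the image has been decoded
        URL.revokeObjectURL(img.src);
    };
    img.src = URL.createObjectURL(e.data);
    document.body.appendChild(img);
};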

Over to the server-side code:

JavaScript

fs.readdir("skyland", function(err, files) { if(err) { throw err; } for(var i = 0; i < files.length; i ) { fs.readFile('skyland/' files[i], function(err, data) { if(err) { throw err; } o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) { var s = [], l = buf.length, ret = []; s.push((1 << 7) 2); if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i ) {
fs.readFile('skyland/' files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is simply hard-coded to 2 here, i.e. a Binary Frame, so the client won't try to call toString on the received data — otherwise it would throw~
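
On the client side you can also guard against this explicitly. A small sketch of my own that tells text and binary messages apart:

JavaScript

var ws = new WebSocket("ws://127.0.0.1:8888");

// 'blob' is the browser default; 'arraybuffer' is handy if you want to inspect raw bytes
ws.binaryType = 'blob';

ws.onmessage = function(e) {
    if (typeof e.data === 'string') {
        console.log('text frame (opcode 1):', e.data);
    } else {
        console.log('binary frame (opcode 2):', e.data);  // a Blob here
    }
};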

The code is simple. Here I'd like to share how fast transferring images over WebSocket actually is.

Tested with quite a few images, 8.24 MB in total:

An ordinary static-resource server takes about 20 s (the server is far away).

A CDN takes about 2.8 s.

And our WebSocket approach??!

The answer: it also takes about 20 s. Disappointed? The time is spent on the transfer itself, not on the server reading the images — with the same images served from the local machine it finishes in about 1 s. So streaming the data over WebSocket cannot break the distance limit and speed up the transfer either.

Now let's look at another use of WebSocket~

 

2. Building a voice chat room with WebSocket

First, let's go over what the voice chat room does:

a user joins a channel, audio is captured from the microphone and sent to the backend, the backend forwards it to everyone else in the channel, and they play back the received audio.

The difficulty looks like it lies in two places: first, capturing the audio input; second, playing back the received data stream.

Let's start with audio input. Here we use HTML5's getUserMedia method — but beware, this method has a big gotcha when you go live, which I'll get to at the end. First the code:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is {audio: true}, enabling audio only; then an SRecorder object is created, and basically all subsequent operations happen on that object. At this point, if the code is running locally, the browser should ask whether to allow microphone input; once you confirm, it's up and running.
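
Note that navigator.getUserMedia has since been deprecated in favour of the promise-based navigator.mediaDevices.getUserMedia. A sketch of my own showing the equivalent call in current browsers (not part of the original demo):

JavaScript

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function (stream) {
            // same as before: hand the MediaStream to SRecorder
            recorder = new SRecorder(stream);
        })
        .catch(function (err) {
            console.log('microphone access denied:', err);
        });
}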

Next let's look at what the SRecorder constructor does; here is the important part:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done audio filtering should know it: "before a piece of audio reaches the speakers for playback, we intercept it on the way, and so we get hold of the audio data; this interception is done by window.AudioContext, and all of our audio operations are based on this object". Through an AudioContext we can create different AudioNode nodes and, for example, add a filter to play a special sound.

Recording works on the same principle: we also go through AudioContext, but with an extra step of tapping the microphone's audio input, instead of requesting the audio's ArrayBuffer with ajax and then decoding it as we usually would. Tapping the microphone requires the createMediaStreamSource method; note that its argument is exactly the stream handed to the callback in getUserMedia's second argument.
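
For contrast, the "usual" path mentioned above — fetching a finished audio file and decoding it — would look roughly like this (a sketch of my own; the file name is made up for illustration):

JavaScript

var context = new AudioContext();
var xhr = new XMLHttpRequest();

// hypothetical file name, purely for illustration
xhr.open('GET', 'some-audio.wav', true);
xhr.responseType = 'arraybuffer';

xhr.onload = function () {
    // decode the whole file, then play it through the default output
    context.decodeAudioData(xhr.response, function (buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0);
    });
};

xhr.send();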

Now for the createScriptProcessor method. Its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short: this method lets us capture and process the audio directly in JavaScript.

So we've reached audio capture! Victory is in sight!

Next, let's connect the microphone input to the audio capture:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official description of context.destination is as follows:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

That is, context.destination returns the final destination of all audio in the context.

Good. At this point we still need an event to listen for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method because it's needed later. Its encodeWAV method is particularly good — the author does repeated audio compression and optimization. The full code is attached together at the end.

At this point the whole "user joins a channel and audio comes in from the microphone" part is done. Next comes sending the audio stream to the server, and this is where it gets slightly painful: as we said, WebSocket uses the opcode to distinguish whether the data is text or binary, but what we feed in via onaudioprocess is an array, while what we eventually need in order to play the sound is a Blob of {type: 'audio/wav'}. So before sending we must convert the array into a WAV Blob, and that's exactly what the encodeWAV method mentioned above is for.

The server then seems very simple: all it has to do is forward.

Local testing worked fine, but then came the giant pit: when I ran the program on a server, calling getUserMedia told me it must be in a secure environment — in other words it needs HTTPS, which means ws has to become wss too… So the server code no longer uses our own handshake, parsing and encoding; it looks like this:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });

    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return;
        }

        res.end(data);
    });
});

// reuse the https server so the same port serves both the page and wss://
var wss = new ws.Server({server: server});

wss.on('connection', function(o) {
    o.on('message', function(message) {
        // "user:xxx" messages register a user in the channel;
        // everything else (the audio blobs) is broadcast to everyone
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});

server.listen(8888);

The code is still quite simple: it uses the https module plus the ws module mentioned at the beginning. userMap is the simulated channel, implementing only the core forwarding.

I used the ws module because it pairs with https to give you wss with almost no effort, and it doesn't conflict with the logic code at all.

I won't go into setting up HTTPS here; the main things you need are a private key, certificate signing (a CSR) and the certificate file. Interested readers can look into it (but without HTTPS you can't use getUserMedia outside a local environment anyway…).

Below is the complete front-end code:

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a username');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device cannot do voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('handshake succeeded');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    config = {};
 
    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // length of the recording
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample size: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , oututSampleBits: config.sampleBits       // output sample size: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and compress
            // merge
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // compress: keep one sample out of every `compression`
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.oututSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1; // mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // resource exchange file identifier "RIFF"
            writeString('RIFF'); offset += 4;
            // total bytes from the next address to end of file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAV file flag
            writeString('WAVE'); offset += 4;
            // waveform format chunk marker
            writeString('fmt '); offset += 4;
            // chunk size, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // format category (1 = PCM samples)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second per channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels × sample rate × bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels × bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk identifier
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold the A key to talk, release the A key to send.

I did try key-free real-time talking, sending via setInterval, but found the background noise was a bit heavy and the result wasn't great; that would need another layer on top of encodeWAV to remove more ambient noise, so I went with the simpler push-to-talk approach.

 

In this article we first looked at WebSocket's common usage, and then tried parsing and building data frames ourselves according to the spec, gaining a deeper understanding of WebSocket.

Finally, the two demos showed some of WebSocket's potential. The voice chat room demo touches on quite a lot; if you've never worked with the AudioContext object, it's best to get familiar with AudioContext first.

That's the end of the article~ If you have any thoughts or questions, feel free to raise them so we can discuss and explore together~

 
