As many may have noticed, California is getting drenched this year. After five years of drought, the floodgates are opening, and we are getting inundated. Over a single day we got about 6 inches of rain; I had to drain the pool twice. While the effects of the drought are not yet fully reversed, for now the rain is a welcome sign.
In many ways, 2017 is also shaping up to be the year of the real-time web deluge. In just the first two months, an amazing number of new real-time solutions have come out, and the deluge shows no sign of slowing.
For example, Cisco rolled out its Spark Board, a new large, tablet-like touchscreen for huddle and conference rooms. It is an extension of Spark and uses WebRTC. Avaya launched Zang Office, a cloud UCaaS platform, again based on WebRTC. And the new Chime communications service from Amazon is based on WebRTC as well; while it uses VP8, Opus, and the core WebRTC library, it is implemented as an app rather than in the browser. Also, Salesforce is offering new communications capabilities, again based on WebRTC.
Clearly, WebRTC is getting real traction in the UC and UCaaS world. In the broader community, several new remote telehealth applications have emerged, powered by platforms like Kandy and Temasys and using WebRTC. Houseparty, a group video chat app that is exploding in popularity among young people, is based on WebRTC, initially supported by Tokbox. Adoption is further accelerated by Microsoft's recent announcement that Edge will get WebRTC 1.0 capabilities in 2017, albeit with some limitations.
At the recent ITEXPO, in the concurrent Real Time Web Solutions Event, I had a chance to talk with a number of leaders in both the UC and app spaces about how the emergence of the real-time web is changing the way people approach communications. The unanimous conclusion: users are rapidly getting used to, and coming to prefer, communications that are contextual to the application or activity that drives them. While no one anticipates the end of the PSTN, there is strong agreement that more and more of our communications time will be spent in communications integrated into apps, built on real-time web protocols and implementations.
While this clearly means you need to consider whether real time fits into your company's apps, there will be impacts for traditional IT and telecom groups as well. As the real-time web explodes, your users will want to participate in events from behind your firewall. The security team needs to decide how to let real-time web traffic into the organization; otherwise, your employees may be stuck joining meetings by phone while the other participants are on full video with screen sharing. Or your users may simply switch to the cellular network instead of your Wi-Fi, driving up cellular charges. And since WebRTC traffic is generally encrypted end to end, capturing and analyzing these conversations may be an issue, especially if your industry requires such tracking.
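To make the security team's task concrete, here is a minimal sketch of the ICE server list a WebRTC app typically hands the browser (the hostnames and credentials are placeholders, not any real service). In practice, "letting real-time web traffic in" usually comes down to allowing outbound UDP to the STUN/TURN ports, or, failing that, letting the media fall back to a TURN relay over TLS on port 443:

```typescript
// Hypothetical ICE server list, in the shape RTCPeerConnection accepts.
// stun.example.com / turn.example.com are placeholder hostnames.
const iceServers = [
  // STUN: lets the browser discover its public address (UDP 3478 by default)
  { urls: "stun:stun.example.com:3478" },
  // TURN over TLS on 443: relays the media when the firewall blocks UDP,
  // since outbound TCP 443 is almost always open
  {
    urls: "turns:turn.example.com:443?transport=tcp",
    username: "meeting-user", // placeholder credential
    credential: "meeting-secret", // placeholder credential
  },
];

// In the browser this would be used as:
//   const pc = new RTCPeerConnection({ iceServers });
```

The TURN-over-443 fallback is exactly why blocking WebRTC outright is hard: to the firewall it looks like ordinary TLS traffic, which is another reason to make a deliberate policy decision rather than hoping the traffic stays out.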
Another significant area is the video-enabled conference room. Ninety-five percent of conference and huddle rooms do not have video, so understanding how those rooms, once video-equipped, will participate in open, app-driven real-time web events is critical. It will be a real problem to discover that you just spent $10,000 equipping a room with video, only to find the exec cannot use it to join the Slack WebRTC conference that closes a big business deal.
The challenge is that most conference room systems limit you to the vendor's own video system, or perhaps a cloud service or gateway to other video systems. To my knowledge, no video room system offers a mechanism for a user to walk into the room and open an arbitrary WebRTC URL to join an app-driven meeting.
For many organizations, this may lead to an interesting option for smaller conference and huddle rooms: bring your own processor. In a BYOP implementation, the room includes a camera, a tabletop speakerphone, and a wall-mounted display. The camera and speakerphone connect to the user's PC via USB, and the display via HDMI. Essentially, the user's PC is brought into the room as the control point and encoding/decoding unit for the conversation. For the user, joining a meeting is the same as at the desk, but the room peripherals provide the group experience for multiple people in the room.
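The only software logic a BYOP room really needs is preferring the room peripherals over the laptop's built-ins. A minimal sketch, assuming the browser's standard enumerateDevices()/getUserMedia() APIs and a hypothetical USB device label of "Room Cam":

```typescript
// Minimal device-selection helper for a BYOP room.
// DeviceInfo mirrors the shape navigator.mediaDevices.enumerateDevices()
// returns; "Room Cam" is a hypothetical USB camera label.
interface DeviceInfo {
  deviceId: string;
  kind: "videoinput" | "audioinput" | "audiooutput";
  label: string;
}

// Prefer a device whose label matches the room peripheral; fall back to
// the first device of that kind (e.g. the laptop's built-in camera).
function pickDevice(
  devices: DeviceInfo[],
  kind: DeviceInfo["kind"],
  roomLabel: RegExp
): DeviceInfo | undefined {
  const ofKind = devices.filter((d) => d.kind === kind);
  return ofKind.find((d) => roomLabel.test(d.label)) ?? ofKind[0];
}

// In the browser this would feed getUserMedia:
//   const devices = await navigator.mediaDevices.enumerateDevices();
//   const cam = pickDevice(devices, "videoinput", /Room Cam/i);
//   const stream = await navigator.mediaDevices.getUserMedia({
//     video: cam ? { deviceId: { exact: cam.deviceId } } : true,
//     audio: true,
//   });
```

Because the selection happens in the meeting app rather than in dedicated room hardware, any WebRTC URL the user can open at the desk works identically in the room, which is the whole appeal of BYOP.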
Edited by Alicia Young