What HMMs Can Do

Jeff Bilmes

bilmes@ee.washington.edu

Dept of EE, University of Washington

Seattle WA, 98195-2500

University of Washington, Dept. of EE, UWEETR-2002-0003

January 2002

Abstract

Since their inception over thirty years ago, hidden Markov models (HMMs) have become the predominant methodology for automatic speech recognition (ASR) systems: today, most state-of-the-art speech systems are HMM-based. There have been a number of ways to explain HMMs and to list their capabilities, each of these ways having both advantages and disadvantages. In an effort to better understand what HMMs can do, this tutorial analyzes HMMs by exploring a novel way in which an HMM can be defined, namely in terms of random variables and conditional independence assumptions. We prefer this definition as it allows us to reason more thoroughly about the capabilities of HMMs. In particular, it is possible to deduce that there are, in theory at least, no limitations to the class of probability distributions representable by HMMs. This paper concludes that, in the search for a model to supersede the HMM for ASR, rather than trying to correct for HMM limitations in the general case, new models should be sought based on their potential for better parsimony, computational requirements, and noise insensitivity.

Introduction

By and large, automatic speech recognition (ASR) has been approached using statistical pattern classification [29, 24, 36], a mathematical methodology readily available in 1968, summarized as follows: given data presumably representing an unknown speech signal, a statistical model of one possible spoken utterance (out of a potentially very large set) is chosen that most probably explains this data. This requires, for each possible speech utterance, a model governing the set of likely acoustic conditions that could realize that utterance.
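In symbols, this is the familiar Bayes decision rule; the following formulation is standard pattern-classification machinery, given here only to make the paragraph concrete, with $X_{1:T}$ the acoustic data and $W$ ranging over candidate utterances:
$$
\hat{W} \;=\; \operatorname*{argmax}_{W} \; p(W \mid X_{1:T}) \;=\; \operatorname*{argmax}_{W} \; p(X_{1:T} \mid W)\, p(W),
$$
so each utterance-conditional model $p(X_{1:T} \mid W)$ plays exactly the role described above; in HMM-based ASR, it is an HMM.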

More than any other statistical technique, the hidden Markov model (HMM) has been most successfully applied to the ASR problem. There have been many HMM tutorials [69, 18, 53]. In the widely read and now classic paper [86], an HMM is introduced as a collection of urns, each containing a different proportion of colored balls. Sampling (generating data) from an HMM occurs by choosing a new urn based only on the previously chosen urn, and then choosing with replacement a ball from this new urn. The sequence of urn choices is not made public (the choices are said to be “hidden”) but the ball choices are known (and are said to be “observed”). Along this line of reasoning, an HMM can be defined in such a generative way, where one first generates a sequence of hidden (urn) choices, and then generates a sequence of observed (ball) choices.
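The urn-and-ball story translates directly into ancestral sampling. Below is a minimal sketch in Python; the parameters pi, A, and B are illustrative values invented for this example, not anything from the paper. Urns are hidden states, ball colors are observation symbols.

import numpy as np

# Illustrative parameters for a 2-urn, 3-color HMM.
pi = np.array([0.6, 0.4])          # pi[i]   = P(first urn is i)
A  = np.array([[0.7, 0.3],         # A[i, j] = P(next urn is j | current urn is i)
               [0.2, 0.8]])
B  = np.array([[0.5, 0.4, 0.1],    # B[i, k] = P(ball color k | urn i)
               [0.1, 0.3, 0.6]])

def sample_hmm(T, rng=np.random.default_rng(0)):
    """Draw a hidden urn sequence and an observed ball sequence of length T."""
    urns, balls = [], []
    q = rng.choice(len(pi), p=pi)                     # choose the first urn
    for _ in range(T):
        urns.append(q)
        balls.append(rng.choice(B.shape[1], p=B[q]))  # draw a ball, with replacement
        q = rng.choice(A.shape[1], p=A[q])            # choose the next urn given only this one
    return urns, balls

hidden, observed = sample_hmm(10)   # the hidden list is the part "not made public"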

For statistical speech recognition, one is interested not only in how HMMs generate data, but also, and more importantly, in an HMM's distribution over observations, and in how those distributions for different utterances compare with each other. An alternative view of HMMs, as presented in this paper, can therefore provide additional insight into what the capabilities of HMMs are, both in how they generate data and in how they might recognize and distinguish between patterns.
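For completeness, the observation distribution referred to here, $p(x_{1:T}) = \sum_{q_{1:T}} p(x_{1:T}, q_{1:T})$, is computable by the standard forward recursion. The sketch below is generic HMM machinery (not code from this paper), reusing the illustrative pi, A, B above:

def log_likelihood(obs, pi, A, B):
    """Return log p(obs) under the HMM (pi, A, B) via the scaled forward recursion."""
    alpha = pi * B[:, obs[0]]              # alpha_1(i) = pi_i * b_i(x_1)
    logp = 0.0
    for x in obs[1:]:
        c = alpha.sum()                    # rescale each step to avoid underflow
        logp += np.log(c)
        alpha = (alpha / c) @ A * B[:, x]  # alpha_t(j) = sum_i alpha_{t-1}(i) A[i,j] b_j(x_t)
    return logp + np.log(alpha.sum())

Recognition then amounts to comparing such scores: of two candidate utterance models, the one assigning the observed acoustics higher likelihood is chosen.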

This paper therefore provides an up-to-date HMM tutorial. It gives a precise definition of an HMM as a variable-size collection of random variables with an appropriate set of conditional independence properties. In an effort to better understand what HMMs can do, this paper also considers a list of properties, and discusses how each might or might not apply to an HMM. In particular, it will be argued that, at least within the paradigm offered by statistical pattern classification [29, 36], there is no general theoretical limit to HMMs given enough hidden states, rich enough observation distributions, sufficient training data, adequate computation, and appropriate training algorithms. Instead, only a particular individual HMM used in a speech recognition system might be inadequate. This perhaps provides a reason for the continual speech-recognition accuracy improvements we have seen with HMM-based systems, and for the difficulty there has been in producing a model to supersede HMMs.
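Anticipating the formal definition in Section 3, the random-variable view can be summarized, in standard HMM notation with hidden chain $Q_{1:T}$ and observations $X_{1:T}$, by the joint factorization
$$
p(x_{1:T}, q_{1:T}) \;=\; p(q_1)\, p(x_1 \mid q_1) \prod_{t=2}^{T} p(q_t \mid q_{t-1})\, p(x_t \mid q_t),
$$
which encodes exactly two conditional independence properties: the hidden chain is Markov, $Q_t \perp\!\!\perp \{Q_{1:t-2}, X_{1:t-1}\} \mid Q_{t-1}$, and each observation depends on the rest of the process only through its own hidden state, $X_t \perp\!\!\perp \{Q_\tau, X_\tau\}_{\tau \neq t} \mid Q_t$.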

This paper does not argue, however, that HMMs should be the final technology for speech recognition. On the contrary, a main hope of this paper is to offer a better understanding of what HMMs can do, and consequently, a better understanding of their limitations, so that they may ultimately be abandoned in favor of a superior model. Indeed, HMMs are extremely flexible and might remain the preferred ASR method for quite some time. For speech recognition research, however, a main thrust should be searching for inherently more parsimonious models, ones that incorporate only the distinct properties of speech utterances relative to competing speech utterances. This latter property is termed structural discriminability [8], and refers to a generative model's inherent inability to represent the properties of data common to every class, even when trained using a maximum likelihood parameter estimation procedure. This means that even if a generative model only poorly represents speech, leading to low probability scores, it may still properly classify different speech utterances. These models are to be called discriminative generative models.

Section 2 reviews random variables, conditional independence, and graphical models (Section 2.1), stochastic processes (Section 2.2), and discrete-time Markov chains (Section 2.3). Section 3 provides a formal definition of an HMM, one that takes both a generative and an “acceptive” point of view. Section 4 compiles a list of properties, and discusses how they might or might not apply to HMMs. Section 5 derives conditions for HMM accuracy in a Kullback-Leibler distance sense, proving a lower bound on the necessary number of hidden states; the section derives sufficient conditions as well. Section 6 reviews several alternatives to HMMs, and concludes by presenting an intuitive criterion one might use when researching HMM alternatives.
