The Real-Time Specification for Java

Greg Bollella, IBM
James Gosling, Sun Microsystems

The RTSJ provides a platform that will let programmers correctly reason about the temporal behavior of executing software. Two members of the Real-Time for Java Experts Group explain the RTSJ's features and the thinking behind the specification's design.
New languages, programming disciplines, operating systems, and software engineering techniques sometimes hold considerable potential for real-time software developers. A promising area of interest—but one fairly new to the real-time community—is object-oriented programming.  
Java, for example, draws heavily from object orientation and is highly suitable for extension to real-time and embedded systems. Recognizing this fit between Java and real-time software development, the Real-Time for Java Experts Group (RTJEG) began developing the real-time specification for Java (RTSJ) [1] in March 1999 under the Java Community Process.
The goal of the RTJEG, of which we are both members, was to provide a platform—a Java execution environment and application program interface (API)—that lets programmers correctly reason about the temporal behavior of executing software. Programmers who write real-time systems applications must be able to determine a priori when certain logic will execute and that it will complete its execution before some deadline. This predictability is important in many real-time system applications, such as aircraft control systems, military command and control systems, industrial automation systems, transportation, and telephone switches. We began our effort by looking at the Java language specification [3] and the Java virtual machine specification [4]. For each feature of the language or runtime code, we asked if the stated semantics would let a programmer determine, with reasonable effort and before execution, the temporal behavior of the feature (or of the things the feature controls) during execution. We identified three features—scheduling, memory management, and synchronization—that did not allow such determination. Requirements defined in a workshop sponsored by the National Institute of Standards and Technology (NIST), combined with input from other industry organizations, helped us identify additional features and semantics. We decided to include four of these additional features: asynchronous event handling, asynchronous control transfer, asynchronous thread termination, and access to physical memory.
These plus the three features from our review of the Java and JVM specifications gave us seven main areas for the RTSJ. We believe these seven areas provide a real-time software development platform suitable for a wide range of applications. The “How Well Does the RTSJ Meet Its Goals?” sidebar gives more background on the requirements and principles that drove RTSJ development. The “RTSJ Timetable” sidebar lists important milestones and contacts for those interested in reviewing the draft specification. The RTSJ completed public review in February 2000, but the Java Community Process mandates that we not finalize the specification until we complete the reference implementation and test suites. As the community gains experience with the RTSJ, small changes to the specification may occur. Our goal in writing this article is to provide insights into RTSJ’s design rationale—particularly how the NIST requirements and other goals affected it—and to show how this important specification fits into the overall strategic direction for defining an object-oriented programming system optimized for real-time systems development.
SCHEDULING

The Java specification provides only broad guidance for scheduling: "When there is competition for processing resources, threads with higher priority are generally executed in preference to threads with lower priority. Such preference is not, however, a guarantee that the highest priority thread will always be running, and thread priorities cannot be used to reliably implement mutual exclusion." Obviously, if you are writing software with temporal constraints, you need a stronger semantic statement about the order in which threads should execute. The typical way to define these semantics is through algorithms that determine how to choose the next thread for execution. The RTSJ specifies a minimum scheduling algorithm, which must be in all RTSJ implementations.

Minimum requirements

At the very least, all implementations must provide a fixed-priority preemptive dispatcher with no fewer than 28 unique priorities. "Fixed-priority" means that the system does not change thread priority (for example, by aging). There is one exception: the system can change thread priorities as part of executing the priority inversion avoidance algorithm (which preserves priority inheritance). Without exception, threads can change their own or another thread's priorities.
The application program must see the minimum 28 priorities as unique; for example, it must know that a thread with a lower priority will never execute if a thread with a higher priority is ready. Thus, you cannot simplistically map 28 priorities into, say, a smaller number because of the underlying system. You can, however, implement the RTSJ on a platform that provides fewer than 28 native priorities. It would then be up to you to use whatever means available to provide the appearance of uniqueness.

Why 28 priorities? We chose this number because real-time scheduling theory indicates that you can expect close to optimal schedulability with 32 priorities [5]. We chose to leave four priorities for other tasks because the RTSJ will likely be only a part of the system, and some systems have only 32 native priorities. You can, of course, add scheduling algorithms to this minimum requirement.

The RTSJ provides three classes—Scheduler, SchedulingParameters, and ReleaseParameters—and subclasses of these that encapsulate temporal requirements. These classes are bound to schedulable objects (threads or event handlers). The required scheduler, an instance of the class PriorityScheduler, uses the values set in these parameter objects.
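As a rough sketch of how these classes fit together, the fragment below binds a PriorityParameters object and a PeriodicParameters object to a RealtimeThread so that the required PriorityScheduler can use them. It assumes the javax.realtime API as later published; the exact shapes of PriorityScheduler.instance(), the six-argument PeriodicParameters constructor, and waitForNextPeriod() may differ from the draft described here, and the period and cost values are invented for illustration.

import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class PeriodicSketch {
    public static void main(String[] args) {
        PriorityScheduler scheduler = PriorityScheduler.instance();

        // SchedulingParameters: a fixed priority near the top of the
        // required range of at least 28 unique real-time priorities.
        PriorityParameters priority =
                new PriorityParameters(scheduler.getMaxPriority() - 1);

        // ReleaseParameters: released every 10 ms, with a 1 ms cost
        // estimate and a deadline equal to the period; no overrun or
        // deadline-miss handlers are registered in this sketch.
        PeriodicParameters release = new PeriodicParameters(
                null,                      // start: first release at start()
                new RelativeTime(10, 0),   // period: 10 ms
                new RelativeTime(1, 0),    // cost estimate: 1 ms
                new RelativeTime(10, 0),   // deadline: 10 ms
                null, null);               // overrun / miss handlers

        RealtimeThread rt = new RealtimeThread(priority, release) {
            public void run() {
                while (waitForNextPeriod()) {
                    // periodic real-time work goes here
                }
            }
        };
        rt.start();
    }
}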
Thread creation

The RTSJ defines the RealtimeThread (RT) class to create threads, which the resident scheduler executes. RTs can access objects on the heap and therefore can incur delays because of garbage collection. Another option in creating threads is to use a subclass of RT, NoHeapRealtimeThread. NHRTs cannot access any objects on the heap, which means that they can run while the garbage collector is running (and thus avoid delays from garbage collection). NHRTs are suitable for code with a very low tolerance of nonscheduled delays. RTs, on the other hand, are more suitable for code with a higher tolerance for longer delays. Regular Java threads will do for code with no temporal constraints.
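A minimal sketch contrasting the two real-time thread classes, again assuming the published javax.realtime API; the constructor shapes are assumptions, and some implementations also require the NoHeapRealtimeThread object itself to be allocated outside the heap.

import javax.realtime.ImmortalMemory;
import javax.realtime.NoHeapRealtimeThread;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class ThreadKindsSketch {
    public static void main(String[] args) {
        int max = PriorityScheduler.instance().getMaxPriority();

        // An RT may reference heap objects, so it can be delayed by the
        // garbage collector; fine for code that tolerates such delays.
        RealtimeThread rt = new RealtimeThread(new PriorityParameters(max - 10)) {
            public void run() {
                // work with a higher tolerance for nonscheduled delays
            }
        };

        // An NHRT may not reference heap objects at all; it is bound to a
        // non-heap memory area (immortal memory here) and can therefore
        // preempt the collector instead of waiting for it.
        NoHeapRealtimeThread nhrt = new NoHeapRealtimeThread(
                new PriorityParameters(max),
                ImmortalMemory.instance()) {
            public void run() {
                // work with a very low tolerance for nonscheduled delays;
                // every object touched here must live outside the heap
            }
        };

        rt.start();
        nhrt.start();
    }
}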
MEMORY MANAGEMENT

Garbage-collected memory heaps have always been considered an obstacle to real-time programming because the garbage collector introduces unpredictable latencies. We wanted the RTSJ to require the use of a real-time garbage collector, but the technology is not sufficiently advanced. Instead, the RTSJ extends the memory model to support memory management in a way that does not interfere with the real-time code's ability to provide deterministic behavior. These extensions let you allocate both short- and long-lived objects outside the garbage-collected heap. There is also sufficient flexibility to use familiar solutions, such as preallocated object pools.

Memory areas

The RTSJ introduces the notion of a memory area—a region of memory outside the garbage-collected heap that you can use to allocate objects. Memory areas are not garbage-collected in the usual sense. Strict rules on assignments to or from memory areas keep you from creating dangling pointers, and thus maintain Java's pointer safety. Objects allocated in memory areas may contain references to objects in the heap. Thus, the garbage collector must be able to scan memory outside the heap for references to objects within the heap to preserve the garbage-collected heap's integrity. This scanning is not the same as building a reachability graph for the heap. The collector merely adds any reference to heap objects to its set of pointers. Because NHRTs can preempt the collector, they cannot access or modify any pointer into the heap.

The RTSJ uses the abstract class MemoryArea to represent memory areas. This class has three subclasses: physical memory, immortal memory, and scoped memory. Physical memory lets you create objects within memory areas that have particularly important characteristics, such as memory attached to a nonvolatile RAM. Immortal memory is a special case. Figure 1 shows how object allocation using immortal memory compares to manual allocation and automatic allocation using a Java heap.

Figure 1. How immortal and scoped memory differ from other methods of determining an object's life. (a) The Java heap uses automatic allocation, in which visibility determines the object's life (if there are no references to the object, the system can deallocate it). Automatic allocation requires a garbage collector, however, which incurs delays. (b) In allocation using the RTSJ immortal memory, the object's life ends only when the Java virtual machine (JVM) terminates. (c) RTSJ scoped memory uses syntactic scope, a special type of memory area outside the garbage-collected heap that lets you manage objects with well-defined lifetimes. When control reaches a certain point in the logic, the system destroys any objects within it and calls their finalizers.

Traditional programming languages use manual allocation, in which the application logic determines the object's life—a process that tends to be time-consuming and error-prone. The one immortal memory pool and all objects allocated from it live until the program terminates. Immortal object allocation is common practice in today's hard real-time systems. With scoped memory, there is no need for traditional garbage collection and the concomitant delays. The RTSJ implements scoped memory through either the MemoryParameters field or the ScopedMemory.enter() method.
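For illustration, here is a sketch of immortal allocation in the spirit of a preallocated object pool; ImmortalMemory.instance() and the inherited enter(Runnable) method follow the published javax.realtime API, and the pool itself is a made-up example.

import javax.realtime.ImmortalMemory;

public class ImmortalPoolSketch {
    // Entries live until the JVM terminates, like a classic preallocated pool.
    static byte[][] pool;

    public static void main(String[] args) {
        // While immortal memory is the current allocation context, every
        // "new" allocates an object that is never garbage collected.
        ImmortalMemory.instance().enter(new Runnable() {
            public void run() {
                pool = new byte[16][];
                for (int i = 0; i < pool.length; i++) {
                    pool[i] = new byte[256];   // lives for the life of the VM
                }
            }
        });
    }
}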
Scoped memory

Figure 1c shows how scoped memory compares to immortal memory and other allocation disciplines. Scoped memory, implemented in the abstract class ScopedMemory, lets you allocate and manage objects using a memory area, or syntactic scope, which bounds the lifetime of any objects allocated within it. When the system enters a syntactic scope, every use of "new" causes the system to allocate memory from the active memory area. When a scope terminates or the system leaves it, the system normally drops the memory's reference count to zero, destroys any objects allocated within, and calls their finalizers. You can also nest scopes. When the system enters a nested scope, it takes all subsequent allocations from the memory associated with the new scope. When it exits the nested scope, the system restores the previous scope and again takes all subsequent allocations from that scope.

Two concrete subclasses are available for instantiation as MemoryAreas: LTMemory (LT for linear time) and VTMemory (VT for variable time). In this context, "time" refers to the cost to allocate a new object. LTMemory requires that allocations have a time cost linear to object size (ignoring performance variations from hardware caches or similar optimizations). You specify the size of an LTMemory area when you create it, and the size remains fixed. Although VTMemory has no such time restrictions, you may want to impose some restrictions anyway to minimize the variability in allocation cost. You build a VTMemory area with an initial size and specify a maximum size to which it can grow. You can also opt to perform real-time garbage collection in a VTMemory area, although the RTSJ does not require that. If you decide to do this, you can build the VTMemory object with a garbage collection object to specify an implementation-specific garbage-collection mechanism. However, if you implement VTMemory, NHRTs must be able to use it.

Because, as Figure 1c shows, control flow governs the life of objects allocated in scoped memory areas, you must limit references to those objects. The RTSJ uses a restricted set of assignment rules that keep longer-lived objects from referencing objects in scoped memory, which are possibly shorter lived. The virtual machine must detect illegal assignment attempts and throw an appropriate exception when they occur. The RTSJ implements scoped memory in two separate mechanisms, which interact consistently: the ScopedMemory.enter() method and the memory area field of MemoryParameters. You specify either one when you create the threads. Thus, several related threads can share a memory area, and the area will remain active until the last thread has exited. This flexibility means that the application can allocate new objects from a memory area that has characteristics appropriate either to the entire application or to particular code regions.
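A sketch of a scope in use, assuming the LTMemory(initial, maximum) constructor and the inherited enter(Runnable) method of the published javax.realtime API; the 64-Kbyte size and the process() helper are invented for illustration.

import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class ScopedSketch {
    public static void main(String[] args) {
        // A linear-time scope whose 64-Kbyte backing store is fixed at creation.
        final LTMemory scope = new LTMemory(64 * 1024, 64 * 1024);

        RealtimeThread rt = new RealtimeThread() {
            public void run() {
                // Objects allocated inside enter() come from the scope and are
                // destroyed, with finalizers run, once the last thread leaves it.
                scope.enter(new Runnable() {
                    public void run() {
                        double[] samples = new double[1024];  // scope allocation
                        process(samples);
                        // Storing 'samples' into a heap or immortal object here
                        // would break the assignment rules and raise an exception.
                    }
                });
            }
        };
        rt.start();
    }

    static void process(double[] samples) { /* work on the samples */ }
}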
SYNCHRONIZATION

In synchronization, the RTSJ uses "priority" somewhat more loosely than the conventional real-time literature. "Highest priority thread" merely indicates the most eligible thread—the thread that the scheduler would choose from among all the threads ready to run. It does not necessarily presume a strict priority-based dispatch mechanism.

Wait queues

The system must queue all threads waiting to acquire a resource in priority order. These resources include the processor as well as synchronized blocks. If the active scheduling policy permits threads with the same priority, the threads are queued first-in, first-out.
Avoiding priority inversion

The synchronized primitive's implementation must have a default behavior that ensures there is no unbounded priority inversion. This applies to conventional Java code if it runs within the overall RTSJ implementation as well as to real-time threads. The priority inheritance protocol—a well-known real-time scheduling algorithm [6]—must be implemented by default. The specification also provides a mechanism by which you can override the default systemwide policy or control the policy to be used for a particular monitor, as long as the implementation supports that policy. The specification of monitor control policy is extensible, so future implementations can add mechanisms. A second policy, priority ceiling emulation (or highest locker), is also specified for systems that support it [6].
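A sketch of the monitor control hooks: MonitorControl.setMonitorControl() and PriorityInheritance.instance() follow the published javax.realtime API, while the PriorityCeilingEmulation.instance(int) factory method is an assumption based on a later API revision, and the shared lock is invented for illustration.

import javax.realtime.MonitorControl;
import javax.realtime.PriorityCeilingEmulation;
import javax.realtime.PriorityInheritance;
import javax.realtime.PriorityScheduler;

public class MonitorPolicySketch {
    static final Object sharedLock = new Object();

    public static void main(String[] args) {
        // Priority inheritance is the mandated default; setting it here
        // merely restates the systemwide policy.
        MonitorControl.setMonitorControl(PriorityInheritance.instance());

        // For one particular monitor, ask for priority ceiling emulation
        // ("highest locker") on implementations that support it.
        int ceiling = PriorityScheduler.instance().getMaxPriority();
        MonitorControl.setMonitorControl(
                sharedLock, PriorityCeilingEmulation.instance(ceiling));

        synchronized (sharedLock) {
            // While the lock is held, the holder runs at the ceiling priority,
            // bounding how long a higher-priority thread can be blocked.
        }
    }
}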
Determinism

Conforming implementations must provide a fixed upper bound on the time required for the application code to enter a synchronized block for an unlocked monitor.

Sharing and communication among threads

Implementers are most likely to use a combination of regular Java threads, RTs, and NHRTs. The RTSJ permits locking between different thread types—even in the most contentious case, which is between regular threads and NHRTs. If an NHRT attempts to lock an object that either an RT or a regular thread has already locked, priority inheritance happens as normal. There is one catch: the non-NHRT that has had its priority boosted cannot execute while the garbage collector is executing. Thus, if a garbage collection is in progress, the boosted thread is suspended until collection completes. This, of course, causes the NHRT to incur a delay because of the collection.

To address this, the RTSJ provides mechanisms that allow NHRTs to communicate (a form of synchronization) with RTs and regular Java threads while avoiding garbage-collector-induced delays in the NHRTs. The RTSJ provides queue classes for communication between NHRTs and regular Java threads. Figure 2 shows a wait-free write queue, which is unidirectional from real-time to non-real-time. NHRTs typically use the write (real-time) operation; regular threads typically use the read operation. The write side is nonblocking (any attempt to write to a full queue immediately returns "false") and unsynchronized (if multiple NHRTs are allowed to write, they must synchronize themselves), so the NHRT will not incur delays from garbage collection. The read operation, on the other hand, is blocking (it will wait until there is data in the queue) and synchronized (it allows multiple readers). When an NHRT sends data to a regular Java thread, it uses the wait-free enqueue operation, and the regular thread uses a synchronized dequeue operation.

A read queue, which is unidirectional from non-real-time to real-time, works in the converse manner. Because the write side is wait-free, data can be lost within the queue when the arrival and consumption rates are incompatible. To avoid delays in allocating memory elements, class constructors statically allocate all memory used for queue elements, giving the queue a finite limit. If the regular thread is not removing elements from the queue at a high enough rate, the queue may become full.
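A sketch of the write-queue pattern described above. The WaitFreeWriteQueue constructor and method shapes are assumptions based on the published javax.realtime API; the queue is allocated in immortal memory so the no-heap writer may reference it (as in the earlier thread sketch, the NHRT object itself may also need to live outside the heap), and takeSample()/handle() are invented placeholders.

import javax.realtime.ImmortalMemory;
import javax.realtime.NoHeapRealtimeThread;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.WaitFreeWriteQueue;

public class WaitFreeSketch {
    // Allocated in immortal memory (below) so the no-heap writer may touch it.
    static WaitFreeWriteQueue queue;

    public static void main(String[] args) throws Exception {
        // All queue storage is preallocated and bounded (32 slots here),
        // so writing never allocates and never blocks.
        ImmortalMemory.instance().enter(new Runnable() {
            public void run() {
                try {
                    queue = new WaitFreeWriteQueue(32);
                } catch (Exception e) {
                    throw new Error(e);
                }
            }
        });

        // Producer: a no-heap thread using the unsynchronized, wait-free
        // write(); a full queue makes write() return false immediately.
        NoHeapRealtimeThread producer = new NoHeapRealtimeThread(
                new PriorityParameters(PriorityScheduler.instance().getMaxPriority()),
                ImmortalMemory.instance()) {
            public void run() {
                try {
                    if (!queue.write(takeSample())) {
                        // queue full: this sample is dropped, by design
                    }
                } catch (Exception e) { /* e.g. memory-scope violation */ }
            }
        };

        // Consumer: an ordinary Java thread using the blocking, synchronized read().
        Thread consumer = new Thread() {
            public void run() {
                try {
                    handle(queue.read());   // blocks until data is available
                } catch (Exception e) { /* interrupted, etc. */ }
            }
        };

        consumer.start();
        producer.start();
    }

    static byte[] takeSample() { return new byte[8]; }  // placeholder data
    static void handle(Object sample) { /* consume the sample */ }
}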