
Supercomputing and Visualization in Computer Science

  • 2.1. What is meant by pre-attentive?
  • Pre-attentive processing can be defined as the visual processing of an item that occurs before attention is directed to it, that is, before the act of selection.
  • 2.2. Referring to the journal articles we examined, when does shape versus color come into play? How does this affect pre-attentive observations on data sets?
  • Color is normally a function of the measured value for a given data set and is expected to change dynamically as that value changes, while the role of shape is to attract attention prior to attentive vision. Color is easier to observe and detect than shape, but both are distinctive features that can be noticed and used to distinguish an object at a glance. The pre-attentive level has an effectively unlimited capacity and is associated with low-level vision. Color and shape both improve perception in pre-attentive observation of data sets: shape is responsible for boundary detection, while color is important for target detection. In general, shape and color help reveal the presence or absence of data in a scene (Grader & McGibbon, 2007), as the sketch below illustrates.
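  • The following minimal sketch (a hedged illustration, not taken from the cited survey; it assumes matplotlib and numpy are installed and uses made-up random data) shows the color-versus-shape pop-out idea: in the left panel a single red circle is detected pre-attentively by color alone, and in the right panel a lone square differs from the circle distractors by shape alone.

```python
# Illustrative sketch of pre-attentive "pop-out": a color target versus a shape target.
# Data are random and purely illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
xs, ys = rng.random(30), rng.random(30)

fig, (ax_color, ax_shape) = plt.subplots(1, 2, figsize=(8, 4))

# Color pop-out: one red circle among blue circles.
ax_color.scatter(xs[1:], ys[1:], c="tab:blue", marker="o")
ax_color.scatter(xs[0], ys[0], c="tab:red", marker="o")
ax_color.set_title("Color target")

# Shape pop-out: one square among circles, all the same color.
ax_shape.scatter(xs[1:], ys[1:], c="tab:blue", marker="o")
ax_shape.scatter(xs[0], ys[0], c="tab:blue", marker="s")
ax_shape.set_title("Shape target")

plt.tight_layout()
plt.show()
```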
  • 2.3. A major issue for visualization is compliance with the Americans with Disabilities Act (ADA) and the 2008 amendments to the ADA.
  • 2.3.1. Which sorts of disabilities most affect use of typical visualized data or simulations?
  • Several disabilities affect the use of typical visualized data or simulations. These include visual disabilities such as blindness and other visual impairments, as well as mental health and emotional disabilities.
  • 2.3.2. For visual disabilities (ones that cannot be mitigated by readily prescribed corrective lenses), which ones are most typical and what are some things that can be done by design to mitigate these?
  • 2.3.3. What methods would you suggest for pre-attentive processing that could still be ADA compliant?
  • A psychophysical method can be adopted; it typically helps measure the sensitivity of perception to the available images or data and determine the dimensions of early visual processes. A seed-expansion method can also be used to derive simple huma
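  • One concrete, hypothetical illustration of an ADA-friendly pre-attentive encoding is to mark categories redundantly with both a colorblind-safe palette and distinct marker shapes, so that no cue relies on color alone. The sketch below assumes matplotlib is available; the category names and the Okabe-Ito palette choice are assumptions for illustration, not taken from the cited material.

```python
# Sketch: redundant encoding (colorblind-safe colors plus distinct shapes),
# so viewers with color-vision deficiencies can still separate the categories.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Hypothetical categories; the hex values are from the Okabe-Ito colorblind-safe palette.
categories = {
    "Group A": {"color": "#0072B2", "marker": "o"},
    "Group B": {"color": "#E69F00", "marker": "s"},
    "Group C": {"color": "#009E73", "marker": "^"},
}

fig, ax = plt.subplots()
for name, style in categories.items():
    x, y = rng.random(15), rng.random(15)
    ax.scatter(x, y, c=style["color"], marker=style["marker"], label=name)

ax.legend(title="Color and shape are redundant")
ax.set_xlabel("x")
ax.set_ylabel("y")
plt.show()
```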
  • 3.1. What are the steps we discussed used to develop a visualization for a project? (Hint: these steps are related to the general steps involved in software engineering.)
  • There are four fundamental steps in developing a visualization project: analysis, design, implementation, and testing.
  • Analysis
  • In analysis, the requirements of the project are defined, along with how they will be met. The requirements phase identifies the problem and what the software will be required to do. This step ends with a requirements document that states what is to be built and captures the requirements by defining the goals of the project. The requirements document specifies information at a high level of description: it describes the things in the system and the actions that can be performed on those things. This does not imply an architectural design, but rather a description of the system's artifacts and how they behave. The analysis team develops the requirements document, which should include states, events, and typical usage scenarios.
  • Design
  • In this step, the architecture is established. It starts by mapping the requirements from the analysis and defining the components, their interfaces, and their behaviors. The deliverable is the design document, which records the architecture and describes a plan for implementing the requirements from the analysis phase (Grader & McGibbon, 2007). A critical implementation priority becomes a task that has to be done right: if it fails, the project fails; if it succeeds, the project might succeed. A sketch of what a component interface defined at this stage might look like is given below.
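  • The following is a hypothetical example of a design-phase artifact: a minimal interface for a visualization component, written in Python. The class and method names are assumptions made purely for illustration and do not come from the assignment or the cited survey.

```python
# Hypothetical design-phase artifact: an abstract interface for a visualization component.
# Concrete renderers (for example, a 2-D chart renderer or a volume renderer) would
# implement this interface during the implementation step.
from abc import ABC, abstractmethod
from typing import Sequence


class Renderer(ABC):
    """Interface agreed on in the design phase; it specifies behavior, not implementation."""

    @abstractmethod
    def load_data(self, values: Sequence[float]) -> None:
        """Accept the data set to be visualized."""

    @abstractmethod
    def render(self) -> bytes:
        """Produce an image (for example, PNG bytes) from the loaded data."""
```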
  • Implementation
  • In this step, the team builds the components from scratch. Given the architecture document from design and the requirements document from analysis, the team builds what has been requested, though there is room for innovation and flexibility.
  • Testing
  • This step is performed by a different team after the implementation is completed, because it is hard to see one's own mistakes: a fresh pair of eyes can discover errors faster than the person who has re-read the material many times. A small example of an automated test is sketched below.
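  • As a hypothetical illustration of the testing step, an independent tester might write automated checks such as the pytest sketch below. The normalize function is an invented stand-in for real project code and is included only so the example is self-contained.

```python
# Hypothetical test file illustrating the testing step; run with: pytest test_normalize.py
# The function under test is a made-up stand-in for a real visualization routine.


def normalize(values):
    """Scale a list of numbers into the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


def test_normalize_maps_extremes_to_0_and_1():
    assert normalize([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]


def test_normalize_handles_constant_input():
    assert normalize([3.0, 3.0]) == [0.0, 0.0]
```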
  • 3.2. In the real world, the visualization developer often has to use tools that are pre-existing; in many cases, this is good object-oriented practice in that it emphasizes a generalized form of reusability. We explored some of these tools. List several classes of tools we have used. (Hint: compilers, interpreters, and operating environments all fit within the generalized concept of tools.)
  • Classes of tools used in visualization development include: system design and modeling tools, interoperability tools, integrated development environments (IDEs), embedded operating systems, compilers, assemblers, libraries, linkers, post-link optimizers, simulators, debuggers and monitors, ROMs (Read-Only Memory), automated test systems, and profiling tools (Grader & McGibbon, 2007). A brief example of using a profiling tool is sketched below.
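  • As one hypothetical example of the profiling-tools class, Python's built-in cProfile module can report where a visualization routine spends its time. The build_histogram function below is invented purely for illustration.

```python
# Sketch: using a profiling tool (Python's built-in cProfile) on an invented routine.
import cProfile
import random


def build_histogram(values, bins=10):
    """Toy stand-in for a visualization preprocessing step."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # avoid a zero bin width for constant input
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    return counts


data = [random.random() for _ in range(100_000)]
cProfile.run("build_histogram(data)", sort="cumulative")
```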
  • 3.3. We have considered open-source versus closed-source (including licensed-for-fee) environments and applications. Compare and contrast the differences among these, including a discussion of the Free Software Foundation's GNU General Public License. Is the issue simply one of license price?
  • Open-source software is typically free to license but requires a certain level of technical expertise to manage. Its long-term costs include implementation, innovation, the opportunity cost of working with service providers, and investment in infrastructure. Closed-source software, on the other hand, varies in cost from a few hundred dollars upward depending on the system; it is customized and offers a high level of security and functionality, continuous innovation, a greater ability to scale, and ongoing training and support, and it requires less technical skill. Although closed-source providers charge for additional services and integration, those services help reduce the gap in costs between the two options (Grader & McGibbon, 2007). The issue is therefore not simply one of license price. The Free Software Foundation's GNU General Public License (GPL), for example, is a copyleft license: it allows free use and modification, but requires that derivative works distributed to others also be released under the GPL. The real differences thus lie in control of the source code, redistribution obligations, support, and total cost of ownership rather than in the purchase price alone.
  • References
  • Grader, M., & McGibbon, T. (2007). A Survey and Review of Software Development Tools for Development of Embedded Systems.