bookmarks  4

  •  

    JSefa (Java Simple Exchange Format API) is a simple library for stream-based serialization of Java objects to XML, CSV, and FLR (extensible to other formats) and back again, using an iterator-style interface that is independent of the serialization format. The mapping between Java object types and the types of the serialization format (e.g. XML complex element types) can be defined either by annotating the Java classes or programmatically through a simple API. The current implementation supports XML, CSV, and FLR (Fixed Length Record); for XML it is based on JSR 173. JSR 173 (StAX) is a popular stream-based XML API for Java that provides an iterator-style interface (a "pull" mechanism, in contrast to the "push" mechanism of SAX). However, JSR 173 defines a low-level API that is not designed for directly serializing and deserializing Java objects. Traditional high-level APIs such as JAXB or Castor, on the other hand, are not stream-based, so reading an XML document creates Java objects holding the data of the complete document in memory at once. Even the integration of StAX into JAXB 2.0 is only a first step towards high-level streaming, as two independent APIs have to be used in parallel. JSefa provides a convenient and performant approach to high-level streaming using an iterator-style interface. Its layered API lets the top layer stream independently of the serialization format (XML, CSV, or any other supported format). A short serialization sketch follows this entry.
    16 years ago by @gresch
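    A minimal sketch of the annotation-based mapping described above, assuming JSefa's CSV API roughly as presented in its tutorial; the class, annotation, and method names (CsvIOFactory, @CsvDataType, @CsvField, Serializer, Deserializer) are recalled from that documentation and should be treated as assumptions:

        import java.io.StringReader;
        import java.io.StringWriter;

        import org.jsefa.Deserializer;
        import org.jsefa.Serializer;
        import org.jsefa.csv.CsvIOFactory;
        import org.jsefa.csv.annotation.CsvDataType;
        import org.jsefa.csv.annotation.CsvField;

        public class JSefaCsvSketch {

            // The mapping is declared directly on the Java class.
            @CsvDataType
            public static class Person {
                @CsvField(pos = 1)
                String name;

                @CsvField(pos = 2)
                int age;
            }

            public static void main(String[] args) {
                // Serialize record by record - only the current object is held in memory.
                StringWriter out = new StringWriter();
                Serializer serializer = CsvIOFactory.createFactory(Person.class).createSerializer();
                serializer.open(out);
                Person p = new Person();
                p.name = "Ada";
                p.age = 36;
                serializer.write(p);
                serializer.close(true); // true: also close the underlying writer

                // Read back with the iterator-style ("pull") interface.
                Deserializer deserializer = CsvIOFactory.createFactory(Person.class).createDeserializer();
                deserializer.open(new StringReader(out.toString()));
                while (deserializer.hasNext()) {
                    Person read = deserializer.next();
                    System.out.println(read.name + " / " + read.age);
                }
                deserializer.close(true);
            }
        }

    In the same spirit, the class could presumably carry XML or FLR annotations instead, with only the factory changing; the streaming loop stays the same.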
     
     
  •  

    cstream is a general-purpose stream-handling tool like UNIX dd, usually used in command-line-constructed pipes. Features (a brief usage sketch follows this entry):
    * Sane command-line switch syntax.
    * Exact throughput limiting on the incoming side. Timing variance in previous reads is counterbalanced in the following reads.
    * Precise throughput reporting, either at the end of the transmission or every time SIGUSR1 is received. Quite useful for asking lengthy operations how much data has been transferred so far, e.g. when writing tapes. Reports are given in bytes/sec and, if appropriate, in KB/sec or MB/sec, where 1K = 1024.
    * SIGHUP causes a clean shutdown before EOF on input; timing information is displayed.
    * Built-in support for writing its PID to a file, for painless sending of these signals.
    * Built-in support for FIFOs. An example use is a 'pseudo-device' - something that sinks or delivers data at an appropriate rate but looks like a file, e.g. when testing soundcard software. See the manpage for examples.
    * Built-in data creation and sink, so there is no more need to redirect from /dev/zero or to /dev/null. The speed of these special devices varies greatly among operating systems; redirecting from them is not appropriate for benchmarking and is a waste of resources anyway.
    * Accepts a 'k', 'm', or 'g' character after a number for "kilo, mega, giga" bytes for the overall data size limit.
    * "gcc -Wall" clean source code; serious effort has been taken to avoid undefined behavior in ANSI C or POSIX, except that long long is required. Limiting and reporting work on data amounts > 4 GB.
    17 years ago by @gresch
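    As a rough illustration of the throughput limiting, the sketch below drives cstream from Java with ProcessBuilder. The flag names used here (-v for a verbose report, -t for a bytes-per-second limit, -i/-o for input and output files) are recalled from the cstream manpage and, like the input file name, should be treated as assumptions rather than verified options:

        import java.io.IOException;

        public class CstreamDemo {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Copy a (hypothetical) dump file to /dev/null at no more than ~1 MB/s,
                // letting cstream print its throughput report when the copy finishes.
                ProcessBuilder pb = new ProcessBuilder(
                        "cstream",
                        "-v", "1",            // verbose: report bytes and throughput at the end
                        "-t", "1000000",      // limit incoming throughput to about 1 MB/s
                        "-i", "backup.dump",  // input file (hypothetical name)
                        "-o", "/dev/null");   // discard the output
                pb.inheritIO();               // show cstream's report on this process's streams
                int exit = pb.start().waitFor();
                System.out.println("cstream exited with status " + exit);
            }
        }

    Sending SIGUSR1 to the running process (its PID can be written to a file with the built-in option mentioned above) would make cstream report progress without stopping the transfer.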
     
     
  •  

    StreamCruncher is an Event Processor. It supports a language based on SQL which allows you to define "Event Processing" constructs such as Sliding Windows, Time Based Windows, Partitions, and Aggregates. Such constructs allow boundaries (some of them time-sensitive) to be specified on a stream of events, which plain SQL does not provide. Queries written in this language can then be used to monitor streams of incoming events. Multi-stream pattern matching, a.k.a. event correlation, is also possible. StreamCruncher is a multi-threaded kernel that runs on Java™. A generic sliding-window sketch follows this entry.
    18 years ago by @gresch
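    StreamCruncher's actual query dialect is not quoted in this description, so the sketch below only illustrates the underlying idea of a time-based sliding window in plain Java; it is a generic illustration of the concept, not StreamCruncher's API or SQL extension. Events older than the window length are evicted before an aggregate is computed:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Generic time-based sliding window over an event stream (not StreamCruncher code).
        public class SlidingTimeWindow {

            // A trivial event: a timestamp in milliseconds and a numeric payload.
            record Event(long timestampMillis, double value) {}

            private final long windowMillis;
            private final Deque<Event> window = new ArrayDeque<>();

            public SlidingTimeWindow(long windowMillis) {
                this.windowMillis = windowMillis;
            }

            // Add a new event and evict everything that has slid out of the window.
            public void onEvent(Event e) {
                window.addLast(e);
                long cutoff = e.timestampMillis() - windowMillis;
                while (!window.isEmpty() && window.peekFirst().timestampMillis() < cutoff) {
                    window.removeFirst();
                }
            }

            // Aggregate over the current window contents (here: a simple sum).
            public double sum() {
                return window.stream().mapToDouble(Event::value).sum();
            }

            public static void main(String[] args) {
                SlidingTimeWindow w = new SlidingTimeWindow(5_000); // 5-second window
                w.onEvent(new Event(1_000, 10.0));
                w.onEvent(new Event(4_000, 20.0));
                w.onEvent(new Event(9_000, 5.0)); // evicts the t=1s event (outside 9s - 5s)
                System.out.println(w.sum());      // 25.0: the events at t=4s and t=9s remain
            }
        }

    An engine like the one described above would maintain many such windows and partitions concurrently and expose their aggregates to SQL-style queries.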
     
     
