This section describes the architecture of Package Drone.
Package Drone consists of a few core concepts. This section explains these concepts.
The main entity (besides an artifact) is a channel. A channel is simply a container for artifacts. A channel is identified by a unique ID which can never change. Additionally, a channel may have an alias name, which is also unique but may change over time. The ID is generated by the system itself, while the alias can be defined by the user.
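The distinction between the immutable ID and the mutable alias can be sketched as follows. The class and field names here are illustrative assumptions, not Package Drone's actual model classes:

```java
// Minimal sketch of channel identity; class and field names are
// assumptions, not Package Drone's actual model.
final class Channel {
    private final String id; // generated by the system, can never change
    private String alias;    // optional unique alias, may change over time

    Channel(String id) {
        this.id = id;
    }

    String getId() {
        return this.id;
    }

    String getAlias() {
        return this.alias;
    }

    void setAlias(String alias) {
        this.alias = alias;
    }
}
```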
Each channel has its own set of metadata. A channel can have provided metadata and extracted metadata (also see Section 5.2.1.5, “Metadata”). The extracted metadata is produced during the aggregation run of a channel: each channel aggregator can add metadata during this process, which is then stored at the channel level.
The functionality of channels is defined by “channel aspects”. A channel without aspects simply stores BLOBs, but performs no further processing on them.
A channel can also be locked. This puts the channel in “read-only mode”.
An artifact is simply a BLOB stored in a channel. Artifacts can be created and deleted, but never modified. This means that the name and data section of an artifact can never change: the same ID yields the same data, until the artifact is deleted.
An artifact has a unique ID. The ID is not only unique inside the channel, but unique across the whole system. It is generated by Package Drone when the artifact is stored. The artifact also has a name. The name is not unique, so the same name can appear multiple times in a channel.
Artifacts also have provided and extracted metadata (also see Section 5.2.1.5, “Metadata”). The extracted metadata is created by metadata extractors.
There are different types of artifacts. The default type is the “stored artifact”, which was uploaded either manually or by a deployment process (like mvn deploy). Each stored artifact can be parent to child artifacts, which are also created manually or by a deployment process.
Then there are virtual artifacts. These are created by virtualizers and behave like child artifacts. However, they cannot be deleted manually.
Other artifact types are the “generator” and “generated” artifacts (see Section 5.2.1.4, “Generators”). These are like virtual artifacts, with the difference that a specific generator implementation is used to create the “generated” child artifacts.
Channel aspects extend the functionality of a channel. When a channel aspect is assigned to a channel, the artifacts and the channel get reprocessed so that the extracted metadata and the channel aggregation state are up to date.
When a channel aspect is removed from the channel, all metadata and virtual artifacts it created will be removed.
Each channel aspect can choose to implement one or more of the following extension points.
This extension will be triggered when a new artifact is stored. It will get the BLOB and can extract metadata which will then be stored with the artifact.
The channel aggregator will be called at least once after the channel has changed. It can aggregate information over the whole channel and provide the result as metadata on the channel level.
An artifact virtualizer has the ability to create virtual child artifacts for an artifact, no matter what type of artifact that is.
The virtualizer has access to the full metadata of the artifact it is called for and can create any number of virtual child artifacts for this parent artifact.
Generators transform “generator artifacts” into one or more “generated artifacts”, which are children of the “generator artifact”.
As with any artifact, the BLOB data of the generator artifact cannot be changed after it was added, but the provided metadata can be edited as needed. So in most cases the generators will use the provided metadata of the generator artifact to create their generated artifacts.
Generators also get asked whether they require re-generation when the channel has changed (e.g. an artifact was added to or removed from the channel). If a generator requires re-generation, it is regenerated before the operation currently in progress is finished.
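The four extension points described above can be sketched as Java interfaces. These shapes are simplified assumptions for illustration only; Package Drone's actual SPI types and method signatures differ:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Simplified, hypothetical shapes of the four channel aspect extension
// points; Package Drone's real SPI types and signatures differ.

interface MetaDataExtractor {
    // called when a new artifact is stored; the result is saved as
    // extracted metadata of that artifact
    Map<String, String> extract(byte[] blob);
}

interface ChannelAggregator {
    // called at least once after the channel changed; the result is
    // saved as extracted metadata on the channel level
    Map<String, String> aggregate(List<Map<String, String>> artifactMetaData);
}

interface Virtualizer {
    // may create any number of virtual child artifacts for the parent
    List<byte[]> createVirtualChildren(byte[] parentBlob, Map<String, String> parentMetaData);
}

interface Generator {
    // creates the "generated" child artifacts, typically from the
    // provided metadata of the generator artifact
    List<byte[]> generate(Map<String, String> providedMetaData);

    // asked after channel changes; true triggers re-generation before
    // the current operation finishes
    boolean shouldRegenerate(String channelEvent);
}

// A trivial extractor: records the BLOB size as extracted metadata.
class SizeExtractor implements MetaDataExtractor {
    @Override
    public Map<String, String> extract(byte[] blob) {
        return Collections.singletonMap("example:size", Integer.toString(blob.length));
    }
}
```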
Artifacts and channels may have “provided” and “extracted” metadata. While extracted metadata is generated by some functionality of the channel aspects, the provided metadata can be edited and can be supplied when the artifact is created.
Metadata is actually a map from metadata key to value, where a metadata key is a combination of a namespace and a key. The namespace is in most cases the ID of the channel aspect.
So for example the “Hasher” channel aspect, which creates hash sums of the BLOB, has the channel aspect ID hasher and stores the MD5 checksum as md5. So the namespace would be hasher, the key md5 and the value the MD5 checksum of the BLOB.
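Such a namespaced metadata key could be modeled as a simple value class. The class and field names below are illustrative assumptions, not Package Drone's actual types:

```java
import java.util.Objects;

// Sketch of a namespaced metadata key; the class name is an assumption.
final class MetaKey {
    final String namespace; // usually the channel aspect ID, e.g. "hasher"
    final String key;       // e.g. "md5"

    MetaKey(String namespace, String key) {
        this.namespace = namespace;
        this.key = key;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MetaKey)) {
            return false;
        }
        MetaKey other = (MetaKey) o;
        return namespace.equals(other.namespace) && key.equals(other.key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(namespace, key);
    }

    @Override
    public String toString() {
        return namespace + ":" + key;
    }
}
```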
If the same metadata key is present in both the provided and the extracted metadata, then the value of the provided metadata overrides the value of the extracted metadata.
When a channel aspect is removed, both the provided and the extracted metadata of this channel aspect are deleted, by deleting all entries where the channel aspect ID matches the namespace of the metadata entry.
Package Drone is built on Java 8 and OSGi. Eclipse Equinox is used as the OSGi container.
The embedded web server is provided by Jetty directly. At the moment Package Drone does not use the OSGi HTTP Service mechanism, but a direct Jetty instance, in order to work around some issues with JSP.
For the Web UI Bootstrap 3.x is used.
The database-based persistence is provided by EclipseLink, which is embedded into OSGi using Eclipse Gemini JPA and Eclipse Gemini DBaccess. DBaccess implements the OSGi specification for JDBC drivers, so that these can be used as OSGi services. Gemini JPA bootstraps EclipseLink, detects and activates bundles which are JPA units, and registers an EntityManagerFactory for each of them as an OSGi service.
As already mentioned, Package Drone uses Jetty as its web server. However, it currently does not use the OSGi HTTP Service variant. For more information see also:
There are two entry points which are currently used. The adapter servlets use the method described here and directly provide a /WEB-INF/web.xml which defines a servlet.
The second entry point is used by all web UI elements. A central Jetty Context is registered which hosts the DispatcherServlet, which in fact dispatches requests to registered web controllers.
The web dispatching functionality is greatly inspired by the Spring WebMVC framework [4]. However, it is not directly compatible with it, although some class names might look alike.
The dispatcher servlet picks up all web controllers, filters and interceptors and integrates these into the main servlet context. It works like a bridge between OSGi services and the JEE parts of Jetty.
A web controller is a service, registered with OSGi, which provides Java methods which in fact handle incoming web requests. Each method is bound to a URL and request method (GET, POST, …). The method call can have parameters, which are bound to values coming from the URL, the HTTP request or internal processing.
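A web controller might conceptually look like the following. The annotations here are defined inline as stand-ins modeled after the Spring-inspired style the text mentions; Package Drone's actual annotation types, packages, and parameter binding differ in detail:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in annotations for illustration; Package Drone's real
// annotations and their packages may differ.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Controller {}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequestMapping {
    String value();                // URL the method is bound to

    String method() default "GET"; // HTTP request method
}

// A controller service: one Java method per URL/request-method pair.
// The method parameters would be bound from the URL, the HTTP request,
// or internal processing.
@Controller
class ChannelInfoController {
    @RequestMapping(value = "/c1", method = "GET")
    public String show(String channelId) {
        return "showing channel " + channelId;
    }
}
```

Such a class would be registered as an OSGi service so that the dispatcher servlet can pick it up.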
These are JEE-like Filters, which are registered with OSGi, picked up by the main web context and registered with the dispatching. Filters are bound to all servlets in the main context.
These intercept calls to the web controllers only, but they have access to the values which go in and out of these controllers.
As seen in Figure 5.1, “Web dispatching”, the Jetty core service takes the Servlet from Bundle A, defined by the Context path in the MANIFEST.MF and the servlet definition in the WEB-INF/web.xml. It also takes the Context service instance registered on the OSGi service bus. In this case Bundle B directly wired the “Dispatcher Servlet” to the context and provides some additional mechanisms in order to look up JSP resources in OSGi.
The Dispatcher Servlet in turn waits for services registered with OSGi which have the @Controller annotation assigned, or which are Filters or Interceptors. The default dispatcher servlet is registered at the context root path and, in this case, will forward requests for /c1 to the controller service in Bundle C.
Since an interceptor in Bundle D is also registered, requests to /c1 will go through this interceptor, whereas requests to “Servlet A” will not.
[4] In fact WebMVC was used in the beginning. However, Spring and OSGi don't go well together. Although this might sound strange, Spring is not modular enough for OSGi: Spring is modular, but not dynamic. Once Spring is fired up, it can hardly be re-configured and does not support services coming and going the way OSGi does.