Recent talks
Abstract
In domain theory, partial information is accumulated and compiled
using directed joins. In the practice of security, cryptanalysts have
their own methods to accumulate and compile partial information, e.g.
about the key of a cipher. It turns out that a relation used for
reasoning about secrecy in the algebraic model is closely related
to the domain-theoretic way-below relation, and that some of the
secrecy derivations can be viewed in this framework. However, the
practice of security leads beyond the algebraic model: the
cryptanalysts usually compile some statistical biases. It is
interesting to see what happens when the domain-theoretic structures,
tacitly present in their work, are generalized from the algebraic to
the probabilistic model of secrecy.
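For readers less familiar with domain theory, the way-below relation mentioned above is the standard one; as a reminder (this is the textbook definition, not a notion specific to this talk):

    x \ll y \iff \text{for every directed } D \text{ with } y \sqsubseteq \bigsqcup D, \ \exists d \in D.\ x \sqsubseteq d

Intuitively, x is way below y when any directed accumulation of partial information that reaches y must already contain x at some finite stage.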
Abstract
Trust is often conveyed through delegation, or through
recommendation. This makes the trust authorities, who process and
publish trust recommendations, into an attractive target for attacks
and spoofing. In some recent empirical studies, this was shown to lead
to a remarkable phenomenon of adverse selection: a greater
percentage of unreliable or malicious web merchants was found among
those with certain types of trust certificates than among those
without. While such findings can be attributed to a lack of diligence
in trust authorities, or even to conflicts of interest, our analysis
of trust dynamics suggests that public trust networks would probably
remain vulnerable even if trust authorities were perfectly
diligent. The reason is that the process of trust building, if trust
is not breached too often, naturally leads to power-law distributions:
the rich get richer, the trusted attract more trust. Evolutionary
processes with such distributions, ubiquitous in nature, are known to
be robust with respect to random failures, but vulnerable to adaptive
attacks. We recommend some ways to decrease the vulnerability of trust
building, and suggest some ideas for exploration.
See also
arxiv.org/abs/0808.0732
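As a toy illustration of the rich-get-richer dynamics described above (my own sketch, not code from the talk), here is a minimal preferential-attachment simulation in Python; each newcomer endorses an existing node with probability proportional to the trust it already holds, and a few hubs end up dominating:

    import random
    from collections import Counter

    def grow_trust_network(n=10000, seed=0):
        """Grow a network by preferential attachment: each newcomer
        endorses one existing node, chosen with probability
        proportional to the endorsements that node already holds."""
        rng = random.Random(seed)
        # each entry of `ends` is one endpoint of an endorsement edge;
        # sampling uniformly from it realizes preferential attachment
        ends = [0, 1]
        trust = Counter({0: 1, 1: 1})
        for newcomer in range(2, n):
            target = rng.choice(ends)
            trust[target] += 1
            ends.extend([newcomer, target])
        return trust

    trust = grow_trust_network()
    # a handful of hubs hold a disproportionate share of all trust
    # (the power-law tail), which is what makes adaptive, targeted
    # attacks so much more damaging than random failures
    print(trust.most_common(5))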
Abstract
We explore a simple mathematical model of network computation, based
on Markov chains. Similar models apply to a broad range of
computational phenomena, arising in networks of computers, as well as
in genetic and neural nets, in social networks, and so on. The main
problem of interaction with such spontaneously evolving computational
systems is that the data are not uniformly structured. An interesting
approach is to try to extract the semantic content of the data from
their distribution among the nodes. A concept is then identified by
finding the community of nodes that share it. The task of data
structuring is thus reduced to the task of finding the network
communities, as groups of nodes that together perform some non-local
data processing. Towards this goal, we extend the ranking methods from
nodes to paths. This allows us to extract some information about the
likely flow biases from the available static information about the
network.
See also
arxiv.org/abs/0802.1306
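A hypothetical sketch of the basic ranking step (my own illustration, not the paper's algorithm): rank nodes by the stationary distribution of the random walk on the network, and then extend the ranking from nodes to paths by composing transition probabilities along a path:

    import numpy as np

    def stationary(P, tol=1e-12):
        """Stationary distribution of a row-stochastic matrix P,
        computed by power iteration: the node-ranking step."""
        pi = np.full(P.shape[0], 1.0 / P.shape[0])
        while True:
            nxt = pi @ P
            if np.abs(nxt - pi).sum() < tol:
                return nxt
            pi = nxt

    def path_weight(path, P, pi):
        """Rank a path by the probability that the stationary walk
        starts at its first node and then follows its edges."""
        w = pi[path[0]]
        for a, b in zip(path, path[1:]):
            w *= P[a, b]
        return w

    # a small 3-node network as a row-stochastic transition matrix
    P = np.array([[0.0, 0.5, 0.5],
                  [1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])
    pi = stationary(P)
    print(pi)
    print(path_weight([0, 1, 0], P, pi))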
Abstract
Originally, quantum probability theory was developed to analyze
statistical phenomena in quantum systems, where classical probability
theory does not apply, because the lattice of measurable sets is not
necessarily distributive. On the other hand, it is well known that the
lattices of concepts that arise in data analysis are generally also
non-distributive, albeit for completely different reasons. In his
recent book, van Rijsbergen argues that
many of the logical tools developed for quantum systems are also
suitable for applications in information retrieval. I explore the
mathematical support for this idea on an abstract vector space model,
covering several forms of data analysis (information retrieval, data
mining, collaborative filtering, formal concept analysis...), and
roughly based on an idea from categorical quantum mechanics. It turns
out that quantum
(i.e., noncommutative) probability distributions arise already in this
rudimentary mathematical framework. Moreover, a Bell-type inequality
is formulated for the standard data similarity measures, interpreted
in terms of classical random variables.
See also
arxiv.org/abs/0802.1296
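For orientation, the classical bound that a Bell-type inequality asserts, in its standard CHSH form (a textbook fact, not the talk's specific formulation for similarity measures):

    |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2

where E(x,y) is the expectation of the product of two \pm 1-valued classical random variables; quantum correlations can violate this bound, up to 2\sqrt{2}.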
Abstract
Near Field Communication (NFC) is one of the short-range technologies
currently being introduced in mobile phones. Besides emulating smart cards,
it enables a wide range of applications, embedding social
computation into physical spaces. Many of these applications give
rise to interesting security problems, and some unexpected
solutions.
In the talk, I outline some of the new security tasks and tools
arising from NFC and other pervasive computation technologies, and
discuss some of the extensions of the standard security models and
concepts needed to accommodate reliable security proofs, especially
in the rapidly expanding social network applications. Such
extensions are conveniently managed in the derivational approach,
developed in our previous work, where secure protocols are built
incrementally, together with the corresponding security proofs,
through a sequence of model refinements. As time permits, I shall
present the derivations (with tool support in the Protocol Derivation
Assistant) of some generic authentication templates, based on timed
channels (distance-bounding), social channels, routing channels, and
some combinations thereof.
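A minimal sketch of the idea behind timed (distance-bounding) channels, assuming nothing beyond the standard round-trip-time argument; the names and the loopback "prover" are illustrative, not taken from the Protocol Derivation Assistant:

    import os, time

    C = 299_792_458.0   # speed of light in m/s

    def rapid_round(send, recv):
        """One timed challenge-response round: the verifier's measured
        round-trip time upper-bounds the prover's distance."""
        challenge = os.urandom(16)
        t0 = time.perf_counter()
        send(challenge)
        response = recv()            # prover must answer immediately
        rtt = time.perf_counter() - t0
        return challenge, response, rtt

    def max_distance(rtt):
        # the signal travels to the prover and back, hence the halving
        return C * rtt / 2

    # loopback "prover" that simply echoes the challenge
    buf = []
    ch, resp, rtt = rapid_round(buf.append, buf.pop)
    assert resp == ch                # the response checks out
    print(f"prover is at most {max_distance(rtt):.1f} m away")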
Abstract
Function abstraction is a building block of
computation. Indeed, in functional programming and lambda calculus,
it is the building block. Using the framework of
categorical quantum mechanics, I examine the role of function
abstraction in quantum computation. It induces the Frobenius algebra
structure of classical objects, and allows
distinguishing classical data from quantum data.
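For reference, the Frobenius law relating the induced multiplication \mu and comultiplication \delta (the standard axiom from categorical quantum mechanics, not a result specific to this talk):

    (\mu \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \delta) \;=\; \delta \circ \mu \;=\; (\mathrm{id} \otimes \mu) \circ (\delta \otimes \mathrm{id})

Classical objects carry such an algebra, with \delta copying and the counit deleting data; by the no-cloning theorem, quantum data supports no such operations, which is what makes the distinction possible.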