Periklis A. Papakonstantinou
assistant professor at Rutgers & member of DIMACS
navigate
publications 
students 
teaching 
talks & notes 
about me 
contact information
Rutgers University, Rutgers Business School
Management Science and Information Systems
94 Rockafeller Rd.  Room 229
Piscataway, NJ 08854
office phone: (848) 445-9225
email: periklis.research@gmail.com
research news
Depth reduction turns a parallel algorithm into a super-parallel algorithm.
We resolve (in the positive) the depth-reduction question that had been open for 26 years, since the seminal works of Yao and of Beigel-Tarui from 1990. We improve exponentially on that work. In addition, following the program of Ryan Williams, we obtain a super-exponential improvement on the depth lower bound separating ACC from NEXP.
Unconditional separations of complexity classes happen quite rarely. Interestingly, our improvement follows as a direct corollary of our depth-reduction algorithm (i.e. the lower bound holds because this algorithm exists).
We empirically observe that human translations (e.g. from Chinese to English) are in fact low-distortion embeddings, once about 10% of idiomatic expressions are removed. Motivated by this, we introduce the study of Machine Learning for concepts that are low-distortion embeddings. This is also one of the first works that formally and rigorously addresses issues in multiclass classification.
We obtain a strong negative result (a no-free-lunch theorem). Hence, we also have a formal explanation of why language models (and other constructs that make structural assumptions about the correct class) are in a sense necessary.
The main research direction this paper introduces appears to be both intriguing and unexplored. One of its goals is to bring mathematical foundations to practical multiclass classification.
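For intuition, the distortion of a correspondence between two point sets can be computed directly from pairwise distances. The sketch below uses invented toy coordinates (the actual work measures distances between sentences and their translations); the function name and data are illustrative only.

```python
import itertools
import math

def distortion(X, Y):
    """Distortion of the correspondence X[i] -> Y[i] between two point
    sets: (worst-case expansion) * (worst-case contraction) over all
    pairs.  The minimum possible value is 1, attained exactly when the
    map is a scaled isometry."""
    expansion = contraction = 0.0
    for i, j in itertools.combinations(range(len(X)), 2):
        dx = math.dist(X[i], X[j])
        dy = math.dist(Y[i], Y[j])
        expansion = max(expansion, dy / dx)
        contraction = max(contraction, dx / dy)
    return expansion * contraction

# Toy coordinates (invented for illustration): Y is roughly a scaled
# copy of X, so the distortion is close to 1.
X = [(0, 0), (1, 0), (0, 1), (2, 2)]
Y = [(0, 0), (2, 0), (0, 2.1), (4, 3.9)]
print(distortion(X, Y))
```

A near-isometric (here: near-scaled) correspondence gives a value close to 1; badly matched translations would push it well above 1.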
SAT (in CNF form) is the prototypical NP-hard problem. What if, instead of looking at the worst-case behavior of SAT, we consider formulas that are sampled uniformly at random? Random SAT has a statistical behavior resembling phenomena in Statistical Physics. For Random SAT this means that the computational difficulty of the problem is governed by the ratio of clauses to variables.
For every Random k-SAT there is one fixed ratio γ which determines a so-called "phase transition phenomenon". In particular, if we sample formulas whose ratio is below γ, then it is computationally easy to decide whether a given formula is satisfiable or not. If the ratio of the formulas is above γ, then the formula is almost certainly unsatisfiable. (Parenthetical remark: the value γ has been determined experimentally; proving this mathematically is an important open question.)
We devise a new SAT algorithm for Random SAT that improves on the state of the art. Importantly, we introduce a theory to analyze and formally compare Random SAT heuristics. More precisely, we introduce a model that allows us to understand the limitations of previous local-search algorithms, then modify them and formally show how to outperform them, and finally obtain empirical results that are completely in line with our theory.
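The phase transition can be seen concretely by sampling random 3-CNF formulas at clause-to-variable ratios on either side of the experimentally located 3-SAT threshold (γ ≈ 4.27) and testing satisfiability by brute force. This toy sketch is invented for illustration and is not the paper's algorithm; it uses tiny formulas so exhaustive search is feasible.

```python
import itertools
import random

def random_3cnf(n_vars, n_clauses, rng):
    """A random 3-CNF formula: each clause has 3 distinct variables,
    each negated independently with probability 1/2."""
    formula = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(n_vars), 3)
        formula.append([(v, rng.random() < 0.5) for v in chosen])  # (var, negated?)
    return formula

def is_satisfiable(formula, n_vars):
    """Exhaustive search over all 2^n assignments (only for tiny n)."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[v] != neg for v, neg in clause) for clause in formula):
            return True
    return False

def sat_fraction(ratio, n_vars=12, trials=30, seed=0):
    """Fraction of satisfiable random formulas at a given clause/variable ratio."""
    rng = random.Random(seed)
    m = int(ratio * n_vars)
    sat = sum(is_satisfiable(random_3cnf(n_vars, m, rng), n_vars)
              for _ in range(trials))
    return sat / trials

# Well below the threshold almost every sampled formula is satisfiable;
# well above it almost none are.
print(sat_fraction(2.0), sat_fraction(7.0))
```

Even at these tiny sizes the satisfiable fraction drops sharply as the ratio crosses the threshold region.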
Big Data is a fashionable buzzword that carries a considerable amount of research activity. So far, work on Big Data has focused on how to make use of it, i.e. on discovering useful structure. Our take is completely different. We view Big Data as a source of low-quality randomness. Our goal is to extract provably "purely" uniformly distributed random bits.
The main technical obstacle is that each sample from a Big Data source is big...
Formally, the processing has to happen in a streaming fashion. Before our work it was shown that randomness extraction using a single stream is provably impossible. We instead develop a theory for randomness extraction on a data streaming machine with two streams. More importantly, we perform extensive experiments on 12 types of big data sources commonly found on the Internet. Our experimental results overwhelmingly indicate that this first-ever Streaming Randomness Extractor outperforms every previous practical extraction method, including those based on quantum randomness expanders.
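For contrast with assumption-based extraction, here is the classical von Neumann extractor. It produces exactly uniform bits, but only under the strong statistical assumption that the input bits are i.i.d. coin flips with a fixed bias — precisely the kind of assumption that cannot be made about big, structured sources. This sketch is illustrative only and is not the two-stream construction of the paper.

```python
import random

def von_neumann_extract(bits):
    """Classical von Neumann extractor: read the input in pairs; emit
    the first bit of each unequal pair, drop (0,0) and (1,1) pairs.
    The output is exactly uniform *if* the input bits are i.i.d.
    biased coin flips -- a statistical assumption, not a guarantee."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]  # ~80% ones
extracted = von_neumann_extract(biased)
ones = sum(extracted) / len(extracted)
print(f"input bias: ~0.80, output bias: {ones:.3f}")
```

On i.i.d. input the output frequency of ones lands near 0.5; on correlated or structured input (the big-source setting) no such guarantee holds, which is what motivates provable streaming extraction.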
students
I learn a lot by teaching, mentoring, and working together with the following brilliant and passionate students.
» PhD students
current PhD Students (alphabetic order):
completed PhDs (reverse chronological order):
- Shiteng Chen (3/2010 – 6/2016): now assistant professor at the Chinese Academy of Sciences (ISCAS) [thesis]
- Guang Yang (9/2011 – 12/2015): now assistant professor at the Chinese Academy of Sciences (ICT) [thesis]
- Hao Song (3/2010 – 6/2014): now engineer at Facebook, Menlo Park, CA [thesis]
- Bangsheng Tang (3/2010 – 6/2013): now senior researcher & head of research at Hulu, Beijing Office [thesis]
* the above are officially supervised students (I have unofficially supervised several others)
» undergraduate student mentorship and supervision
- Brad Bentz (5/2016 – 7/2016): intern from Brown University, through the DIMACS REU program
- Yuping Luo (3/2014 – current): typical long-term undergraduate supervision (Tsinghua student)

* at Rutgers I regularly supervise Capstone projects for MSc students in MSIS
and Computer Science
» completed MSc and Undergraduate Thesis supervision
 Lei Zhixiang (undergraduate thesis, Tsinghua, 2016): now PhD student at Harvard
 Sixue Liu (MSc thesis, Tsinghua, 2016): now (deferred) PhD student at Cambridge University
 Yuanzhi Li (undergraduate thesis, Tsinghua, 2014): now PhD student at Princeton
 Xin Yang (undergraduate thesis, Tsinghua, 2014): now PhD student at the U. Washington
 Mao Jieming (undergraduate thesis, Tsinghua, 2013): now PhD student at Princeton
- Silvio Frischknecht (MSc thesis, ETH, 2012, co-supervised with Roger Wattenhofer)
teaching
2016 – 2017 teaching:
 Fall 2016:
Information Security and Data Privacy (PhD)  (to be linked here later)
Rutgers students: here is the syllabus
» past courses at Tsinghua (6 undergraduate & 5 graduate courses)
» before Tsinghua (5 undergraduate courses)
talks & notes
» talks
Rutgers students: for slides from my Rutgers courses email me
Below are slides for some of the research talks I gave over the last 12 years.
Slides are readable only with Apple's Keynote.
- One, two, many streams [ Keynote slides ] – for REU students at DIMACS
- Efficient depth-reduction for composites is possible [ Keynote slides ]
- Streaming extraction [ Keynote slides ]
- Bagging-by-design [ Keynote slides ]
* redistribution or modification of these or my course slides only with my written permission
talk schedule (recent & upcoming)
When | Title | Where
Jul. 2016 | One, two, many streams | DIMACS REU
Apr. 2016 | Efficient depth-reduction for composites is possible | Columbia
Mar. 2016 | 〃 | Princeton
Dec. 2015 | 〃 | Kyoto
Oct. 2015 | Bagging in the real world | Rutgers (Applied Prob.)
Sep. 2015 | Big Extraction | Rutgers (Theory)
» notes
The intended use is as a supplement to a comprehensive textbook.

A crash course in probability [ pdf ]
 a mild introduction to measure concentration for Business students 
Advanced algorithms notes [ pdf ]
 from an undergraduate class I taught regularly in the Elite CS class @ Tsinghua 
Elementary counting [ pdf ]
 an introduction to combinatorial enumeration (assuming no prerequisites)
about me
» my research
I think about problems at the meeting point of computer science with its mathematical foundations, and with applications such as cryptography, machine learning, big data, and their intersection.
My research carries (i) a purely foundational component and (ii) a second component driven by important practical challenges. I try to keep these two research lives separate. Whenever I do Theory I just do Theory and whenever I do Applications I just do Applications. However, all stem from the same source. I approach practical problems by "injecting technology" from theory works developed over the years.
» theoretical research
What I like most is working on things that are provable, using techniques that blend combinatorics with algebra, group representations, geometry, and functional analysis. For example, I think about questions such as:
 can every randomized algorithm be transformed into an always correct, efficient, deterministic algorithm?
 are complexity classes such as P and NP (and their circuit complexity analogs) different?
 can we universally transform parallel algorithms into extremely parallel ones?
 can we trade memory for running time in SAT-solving algorithms?
 can Cryptography be done using very limited devices (e.g. Cryptography over Massively Streamed Data)?
» applicationsoriented research
I care about practical problems in their genuine form. For instance, part of my contribution in machine learning bridges theory and practice by improving practical heuristics (in theory and, significantly, in experiments), in particular for very multiclass classification tasks. I am primarily interested in:
 developing foundations for very multiclass structured prediction problems as they appear in NLP
 constructing new heuristics that strictly (and provably) outperform widely used practical heuristics
Theory can drive innovation in heuristics, and in transformative ways. The technical challenge is that in certain cases it is best not to use a given heuristic at all (because using it hurts performance or accuracy). So, how do we compare algorithms that occasionally make things worse, without making assumptions about the inputs? Practically relevant theory (what is the "right level for doing theory"?) could provide frameworks for constructing and comparing heuristics in meaningful ways; i.e. a theory that formalizes the notion "assume practitioners know what they are doing; now, on top of that, what is the next better thing to do?".
» at Rutgers
Since September 2015 I have been an assistant professor in the operations research and related information systems department, aka MSIS, at Rutgers Business School. This setting works very well for my foundational and applied projects. Rutgers is a great place for foundational work: splendid operations research, theory of computing, and theoretical statistics groups. It also has DIMACS, in which I am actively involved. At the same time, a faculty member at MSIS has the privilege of being exposed to important, actual real-life problems.
» before Rutgers
From March 2010 until August 2015 I was with Tsinghua University (which I left for Rutgers). At Tsinghua I was an assistant professor and PhD supervisor at Andrew Yao's institute (the Institute for Interdisciplinary Information Sciences). There, I taught some very bright undergraduates in the "Tsinghua Elite CS class", and I founded the Laboratory for the Study of Randomness. This lab was at the frontiers of research in the foundations of computing and related application areas. In total, four PhD students graduated under my supervision, two of whom graduated while I was at Tsinghua.
I took up the assistant professor position at Tsinghua immediately after my PhD in computer science from the University of Toronto in June 2010. A nontrivial experience from my student life was reading pure math (I earned an MSc in Mathematics) simultaneously with Computer Science (I also hold two more degrees: an MSc in Computer Science and an MSc/BEng in Computer Engineering & Science, and I obtained an Electronics Engineer license).
Since September 2015 I have lived permanently in Hoboken, taking the long commute to Rutgers.
Constructions, Lower Bounds, and New Directions in Cryptography and Computational Complexity
Department of Computer Science, University of Toronto, 2010. Advisor: Charles Rackoff
[pdf]
Abstract
In the first part of the thesis we show black-box separations in public- and private-key cryptography. Our main result answers in the negative the question of whether we can base Identity Based Encryption (IBE) on Trapdoor Permutations. Furthermore, we make progress towards the black-box separation of IBE from the Decisional Diffie-Hellman assumption. We also show the necessity of adaptivity when querying one-way permutations to construct pseudorandom generators à la Goldreich-Levin; an issue related to streaming models for cryptography.
In the second part we introduce streaming techniques for understanding randomness in efficient computation, for proving lower bounds for efficiently computable problems, and for computing cryptographic primitives.
We observe [Cook'71] that logarithmic space-bounded Turing Machines, equipped with an unbounded stack, henceforth called Stack Machines, together with an external random tape of polynomial length, characterize RP, BPP, and so on. By parametrizing on the number of passes over the random tape we provide a technical perspective bringing together Streaming, Derandomization, and older works on Stack Machines. Our technical developments relate this new model to previous works in derandomization. For example, we show that to derandomize parts of BPP it is in some sense sufficient to derandomize BPNC (a class believed to be much lower than P ⊆ BPP). We also obtain a number of results for variants of the main model, regarding e.g. the fooling power of Nisan's pseudorandom generator (PRG) [Nisan'92] for the derandomization of BPNC^1, and the relation of parametrized access to NP-witnesses with width parametrizations of SAT.
A substantial contribution regards a streaming approach to lower bounds for problems in the NC hierarchy (and above). We apply Communication Complexity to show a streaming lower bound for a model with an unbounded (free-to-access) pushdown storage. In particular, we obtain an n^{Ω(1)} lower bound simultaneously on the space and on the number of passes over the input, for a variant of inner product. This is the first lower bound for machines that correspond to poly-size circuits, can compute Parity and Barrington's language, and decide problems in P − NC, assuming EXP ≠ PSPACE. Finally, we initiate the study of log-space streaming computation of cryptographic primitives. We observe that the work on Cryptography in NC^0 [Applebaum, Ishai, Kushilevitz '06] yields a non-black-box construction of a one-way function computable in an O(log n)-space bounded streaming model. Also, we show that relying on this work is in some sense necessary.
publications
Remark: author order in all papers is alphabetical
Derandomization and Lower Bounds
Depth-reduction for composites
Shiteng Chen, Periklis A. Papakonstantinou
Foundations of Computer Science (FOCS), 2016
We obtain a new depth-reduction construction, which implies a super-exponential improvement in the depth lower bound separating $NEXP$ from nonuniform $ACC$.
In particular, we show that every circuit with $AND,OR,NOT$, and $MOD_m$ gates, $m\in\mathbb{Z}^+$, of polynomial size and depth $d$ can be reduced to a depth-$2$, $SYM\circ AND$, circuit of size $2^{{(\log n)}^{O(d)}}$. This is an exponential size improvement over the traditional Yao-Beigel-Tarui construction, which has size blowup $2^{{(\log n)}^{2^{O(d)}}}$. Therefore, depth-reduction for composite $m$ matches the size of the Allender-Hertrampf construction for primes from 1989.
One immediate implication of our depth reduction is an improvement of the depth, from $o(\log\log n)$ to $o(\log n/\log\log n)$, in Williams' program for $ACC$ circuit lower bounds against $NEXP$. This is just short of $O(\log n/\log\log n)$ and thus pushes Williams' program to the $NC^1$ barrier, since $NC^1$ is contained in $ACC$ of depth $O(\log n/\log\log n)$. A second, but non-immediate, implication regards the strengthening of the $ACC$ lower bound in the Chattopadhyay-Santhanam interactive compression setting.
to appear
Correlation lower bounds from correlation upper bounds
Shiteng Chen, Periklis A. Papakonstantinou
Information Processing Letters (IPL), 2016
We prove a $2^{-O\left( \frac{n}{d(n)} \right) }$ lower bound on the correlation of $MOD_m\circ AND_{d(n)}$ and $MOD_r$, where $m,r$ are positive integers. This is the first nontrivial lower bound on the correlation of such functions for arbitrary $m,r$. Our motivation is the question posed by Green et al. [GRS05], to which the $2^{-O\left( \frac{n}{d(n)} \right) }$ bound is a partial negative answer. We first show a $2^{-\Omega(n)}$ correlation upper bound that implies a $2^{\Omega(n)}$ circuit size lower bound. Then, through a reduction, we obtain the $2^{-O(\frac{n}{d(n)})}$ correlation lower bound. In fact, the $2^{\Omega(n)}$ size lower bound is for $MAJ\circ ANY_{o(n)}\circ AND\circ MOD_r\circ AND_{O(1)}$ circuits, which is of independent interest.
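To make the central quantity concrete: one common way to define the correlation of two Boolean functions is |Pr[f = g] − Pr[f ≠ g]| under uniform inputs (the paper's exact normalization may differ). Brute force over small n already exhibits the exponential decay for MOD_2 against one MOD_3 acceptance condition; this snippet is an illustration of the definition, not of the paper's proof.

```python
from itertools import product

def correlation(f, g, n):
    """|Pr[f = g] - Pr[f != g]| over uniformly random n-bit inputs,
    computed by exhaustive enumeration."""
    agree = sum(f(x) == g(x) for x in product((0, 1), repeat=n))
    return abs(2 * agree - 2 ** n) / 2 ** n

def parity(x):            # MOD_2
    return sum(x) % 2

def mod3_zero(x):         # one MOD_3 acceptance condition: sum divisible by 3
    return int(sum(x) % 3 == 0)

for n in (4, 8, 16):
    print(n, correlation(parity, mod3_zero, n))
```

The values shrink as n grows, in line with the classical fact that MOD_2 and MOD_3 have exponentially small correlation.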
@ARTICLE{CP16a,
author = {Shiteng Chen and Periklis A. Papakonstantinou },
title = {Correlation lower bounds from correlation upper bounds},
journal = {Information Processing Letters (IPL)},
year = {2016},
volume = {116},
pages = {537--540},
number = {8}
}
Pseudorandomness for Linear Length Branching Programs and Stack Machines
Andrej Bogdanov, Periklis A. Papakonstantinou, Andrew Wan
RANDOM, 2012
We show the existence of an explicit pseudorandom generator G of linear stretch such that for every constant k, the output of G is pseudorandom against:
- Oblivious branching programs over alphabet {0, 1} of length kn and size 2^{O(n/log n)} on inputs of size n.
- Non-oblivious branching programs over alphabet Σ of length kn, provided the size of Σ is a power of 2 and sufficiently large in terms of k.
- The model of logarithmic space randomized Turing Machines (over alphabet {0,1}) extended with an unbounded stack that make k passes over their randomness.
@INPROCEEDINGS{BPW11,
author = {Andrej Bogdanov and Periklis A. Papakonstantinou and Andrew Wan},
title = {Pseudorandomness for Linear Length Branching Programs and Stack Machines},
booktitle = {RANDOM},
year = {2012},
month = {August},
pages = {447--458},
address = {Boston, USA}
}
Pseudorandomness for Read-Once Formulas
Andrej Bogdanov, Periklis A. Papakonstantinou, Andrew Wan
Foundations of Computer Science (FOCS), 2011
We give an explicit construction of a pseudorandom generator for read-once formulas whose inputs can be read in arbitrary order. For formulas on n inputs with arbitrary gates of fan-in at most d = O(n/log n), the pseudorandom generator uses (1 − Ω(1))n bits of randomness and produces an output that looks 2^{−Ω(n)}-pseudorandom to all such formulas.
Our analysis is based on the following lemma. Let p = Mz + e, where M is the parity-check matrix of a sufficiently good binary error-correcting code of constant rate, z is a random string, e is a small-bias distribution, and all operations are modulo 2. Then for every pair of functions f,g: {0,1}^{n/2} → {0,1} and every equipartition (I,J) of [n], the distribution p is pseudorandom for the pair (f(x_I), g(x_J)), where x_I and x_J denote the restriction of x to the coordinates in I and J, respectively.
More generally, our result applies to read-once branching programs of bounded width with arbitrary ordering of the inputs. We show that such branching programs are more powerful distinguishers than those that read their inputs in sequential order: there exist (explicit) pseudorandom distributions that separate these two types of branching programs.
@INPROCEEDINGS{BPW11,
author = {Andrej Bogdanov and Periklis A. Papakonstantinou and Andrew Wan},
title = {Pseudorandomness for ReadOnce Formulas},
booktitle = {Foundations of Computer Science (FOCS)},
year = {2011},
month = {October},
pages = {240--246},
address = {Palm Springs, USA}
}
How strong is Nisan's pseudorandom generator?
Matei David, Periklis A. Papakonstantinou, Anastasios Sidiropoulos
Information Processing Letters (IPL), 2011
We study the resilience of the classical pseudorandom generator (PRG) of Nisan [Nis92] against space-bounded machines that make multiple passes over the input. Our motivation comes from the derandomization of BPNC^1. Observe that if for every logspace machine that reads its input n^{O(1)} times there is a PRG fooling this machine, then in particular we fool NC^1 circuits. Nisan's PRG is known to fool logspace machines that read the input once. We ask what the limits of this PRG are for logspace machines that make multiple passes over the input. We show that for every constant k, Nisan's PRG fools logspace machines that make (log n)^k passes over the input, using a seed of length (log n)^{k'}, for some k' > k. We complement this result by showing that, in general, Nisan's PRG cannot fool logspace machines that make n^{O(1)} passes, even for a seed of length 2^{Θ(√(log n))}. The observations made in this note outline a more general approach to understanding the difficulty of derandomizing BPNC^1.
@ARTICLE{DPS11,
author = {Matei David and Periklis A. Papakonstantinou and Anastasios Sidiropoulos},
title = {How strong is {N}isan's pseudorandom generator?},
journal = {Information Processing Letters (IPL)},
year = {2011},
volume = {111},
pages = {804--808},
number = {16}
}
Computationally Limited Randomness
Matei David, Phuong Nguyen, Periklis A. Papakonstantinou, Anastasios Sidiropoulos
Innovations in Theoretical Computer Science (ITCS), 2011
The starting point of this work is the basic question of whether there exists a formal and meaningful way to limit the computational power that a time bounded randomized Turing Machine can employ on its randomness. We attack this question using a fascinating connection between space and time bounded machines given by Cook [Coo71]: a Turing Machine S running in space s with access to an unbounded stack is equivalent to a Turing Machine T running in time 2^{O(s)}. We extend S with access to a read-only tape containing 2^{O(s)} uniform random bits, and a usual error regime: one-sided or two-sided, and bounded or unbounded. We study the effect of placing a bound p on the number of passes S is allowed over its random tape. It follows from Cook's results that:
- If p = 1 (one-way access) and the error is one-sided unbounded, S is equivalent to deterministic T.
- If p = ∞ (unrestricted access), S is equivalent to randomized T (with the same error).
As our first two contributions, we completely resolve the case of unbounded error. We show that we cannot meaningfully interpolate between deterministic and randomized T by increasing p:
- If p = 1 and the error is two-sided unbounded, S is still equivalent to deterministic T.
- If p = 2 and the error is unbounded, S is already equivalent to randomized T (with the same error).
In the bounded error case, we consider a logarithmic space Stack Machine S that is allowed p passes over its randomness. Of particular interest is the case p = 2^{(log n)^i}, where n is the input length and i is a positive integer. Intuitively, we show that S performs polynomial time computation on its input and parallel (preprocessing plus NC^i) computation on its randomness. Formally, we introduce Randomness Compilers. In this model, a polynomial time Turing Machine gets an input x and outputs a (polynomial size, bounded fan-in) circuit C_x that takes random inputs. Acceptance of x is determined by the acceptance probability of C_x. We say that the randomness compiler has depth d if C_x has depth d(x). As our third contribution, we show that:
- S simulates, and is in turn simulated by, a randomness compiler with depth O((log n)^i) and O((log n)^{i+1}), respectively.
@INPROCEEDINGS{DNPS11,
author = {Matei David and Phuong Nguyen and Periklis A. Papakonstantinou and Anastasios Sidiropoulos},
title = {Computationally Limited Randomness},
booktitle = {Innovations in Computer Science (ICS)},
year = {2011},
month = {January},
pages = {173--182},
address = {Beijing, P.R.China}
}
Tradeoff Lower Bounds for Stack Machines
Matei David, Periklis A. Papakonstantinou
Conference on Computational Complexity (CCC), 2010; journal version: Computational Complexity
A space bounded Stack Machine is a regular Turing Machine with a readonly input tape, several space bounded readwrite work tapes, and an unbounded stack. Stack Machines with a logarithmic space bound have been connected to other classical models of computation, such as polynomial time Turing Machines (P) [Coo71] and polynomial size, polylogarithmic depth, bounded fanin circuits (NC) e.g., [BCD+89].
In this paper, we present significant new lower bounds and techniques for Stack Machines, in the form of a tradeoff lower bound between space and the number of passes over the input tape. Specifically, we give an explicit permuted inner product function such that, for every constant ε > 0, any Stack Machine computing this function requires either Ω(N^{1/4−ε}) space or Ω(N^{1/4−ε}) passes, where N is the input size. In the case of logarithmic space Stack Machines, this yields an unconditional Ω(N^{1/4−ε}) lower bound on the number of passes. To put this result in perspective, we note that Stack Machines with logarithmic space and a single pass over the input can compute Parity, Majority, as well as certain languages outside NC. The latter follows from [All89], conditional on the widely believed complexity assumption that PSPACE ⊊ EXP.
Our technique is a novel communication complexity reduction, thereby extending the already wide range of models of computation for which communication complexity can be used to obtain lower bounds. Informally, we show that a k-player Number-in-Hand (NIH) communication protocol for a base function f can efficiently simulate a space- and pass-bounded Stack Machine for a related function F, which consists of several "permuted" instances of f, bundled together by a combining function h. Tradeoff lower bounds for Stack Machines then follow from known communication complexity lower bounds.
The framework for this reduction was given by [BHN08], who used it to obtain similar tradeoff lower bounds for Turing Machines with a constant number of pass bounded external tapes. We also prove that the latter cannot efficiently simulate Stack Machines, conditional on the complexity assumption that E \not\subset PSPACE. It is the treatment of an unbounded stack which constitutes the main technical novelty in our communication complexity reduction.
@INPROCEEDINGS{DP10,
author = {Matei David and Periklis A. Papakonstantinou},
title = {Tradeoff Lower Bounds for Stack Machines},
booktitle = {Conference on Computational Complexity (CCC)},
year = {2010},
month = {June},
pages = {173--182},
address = {Boston, USA},
}
Cryptography and Randomness
True Randomness from Big Data
Periklis A. Papakonstantinou, David Woodruff, Guang Yang
Scientific Reports (Nature's Scientific Reports, NPG), 2016, to appear
Generating random bits is a difficult task that is important for simulating physical systems, for cryptography, and for many other applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as a sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources, and we extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle big sources in practice, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical-world empirical extractors, as measured by standardized tests.
Cryptography with Streaming Algorithms
Periklis A. Papakonstantinou, Guang Yang
CRYPTO, 2014
We put forth the question of whether cryptography is feasible using streaming devices. We give constructions and prove lower bounds. In streaming cryptography (not to be confused with stream ciphers) everything—the keys, the messages, and the seeds—is huge compared to the internal memory of the device. These streaming algorithms have small internal memory and make a constant number of passes over big data maintained in a constant number of read/write external tapes. Typically, the internal memory size is $O(\log n)$ and we use 2 external tapes; 1 tape is provably insufficient. In this setting we cannot compute instances of popular intractability assumptions. Nevertheless, we base cryptography on these assumptions by employing non-black-box techniques, and we study its limitations.
We introduce new techniques to obtain unconditional lower bounds showing that no super-linear stretch pseudorandom generator exists, and that no Public Key Encryption (PKE) exists with private keys of size sublinear in the plaintext length.
For possibility results, assuming the existence of one-way functions computable in $NC^1$—e.g. factoring, lattice assumptions—we obtain streaming algorithms computing one-way functions and pseudorandom generators. Given the Learning With Errors (LWE) assumption, we construct PKE where both the encryption and the decryption are streaming algorithms. The starting point of our work is the groundbreaking work of Applebaum-Ishai-Kushilevitz on Cryptography in $NC^0$. In the end, our developments are technically orthogonal to their work; e.g. there is a PKE where the decryption is a streaming algorithm, whereas no PKE decryption can be in $NC^0$.
@INPROCEEDINGS{PY14,
author = {Periklis A. Papakonstantinou and Guang Yang},
title = {Cryptography with Streaming Algorithms},
booktitle = {CRYPTO},
year = {2014},
month = {August},
pages = {55--70},
address = {Santa Barbara, USA}
}
A remark on one-wayness vs pseudorandomness
Periklis A. Papakonstantinou, Guang Yang
Computing and Combinatorics Conference (COCOON), 2012
Every pseudorandom generator is in particular a one-way function. If we only consider part of the output of the pseudorandom generator, is this still one-way? Here is a general setting formalizing this question. Suppose $G:\{0,1\}^n \to \{0,1\}^{l(n)}$ is a pseudorandom generator with stretch $l(n)$. Let $M_R \in \{0,1\}^{m(n) \times l(n)}$ be a linear operator computable in polynomial time given randomness $R$. Consider the function $F(x,R) = (M_R\, G(x), R)$.
We obtain results addressing this question.
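As a concrete illustration, the sketch below instantiates $F(x,R) = (M_R\, G(x), R)$ in Python. All names here are hypothetical placeholders: the SHA-256-based `toy_prg` carries no provable pseudorandomness guarantee, and deriving $M_R$ from a seeded sampler is only one way to realize a polynomial-time linear operator from $R$.

```python
import hashlib
import random

def toy_prg(x_bits, out_len):
    """Hypothetical stand-in for a pseudorandom generator
    G: {0,1}^n -> {0,1}^l(n); SHA-256 in counter mode is only a
    placeholder, not a PRG in the provable sense."""
    out = []
    ctr = 0
    while len(out) < out_len:
        block = hashlib.sha256(bytes(x_bits) + ctr.to_bytes(4, "big")).digest()
        out.extend((byte >> i) & 1 for byte in block for i in range(8))
        ctr += 1
    return out[:out_len]

def F(x_bits, R, m, l):
    """F(x, R) = (M_R G(x), R): an m x l 0/1 matrix M_R, derived from the
    public randomness R, applied to G(x) over GF(2)."""
    rng = random.Random(R)                 # M_R is determined by R alone
    M_R = [[rng.randrange(2) for _ in range(l)] for _ in range(m)]
    g = toy_prg(x_bits, l)
    y = [sum(M_R[i][j] & g[j] for j in range(l)) % 2 for i in range(m)]
    return y, R
```

Since $M_R$ depends only on $R$, the output $(M_R G(x), R)$ reveals the linear map itself; whether such an $F$ remains one-way is exactly the question studied.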
@INPROCEEDINGS{PY12,
author = {Periklis A. Papakonstantinou and Guang Yang},
title = {A remark on one-wayness vs pseudorandomness},
booktitle = {International Computing and Combinatorics Conference (COCOON)},
year = {2012},
month = {August},
pages = {482--494},
address = {Sydney, Australia}
}
Limits on the stretch of non-adaptive constructions of pseudorandom generators
Josh Bronson, Ali Juma, Periklis A. Papakonstantinou
Theory of Cryptography Conference (TCC), 2011
The standard approach for constructing a large-stretch pseudorandom generator, given a one-way permutation or a smaller-stretch pseudorandom generator, involves repeatedly composing the given primitive with itself. In this paper, we consider whether this approach is necessary, that is, whether there are constructions that do not involve composition. More formally, we consider black-box constructions of pseudorandom generators from pseudorandom generators of smaller stretch or from one-way permutations, where the constructions make only non-adaptive queries to the given object. We consider three classes of such constructions, and for each class, we give a black-box impossibility result that demonstrates a contrast between the stretch that can be achieved by adaptive and non-adaptive black-box constructions.
We first consider constructions that make constantly many non-adaptive queries to a given pseudorandom generator, where the seed length of the construction is at most O(log n) bits longer than the length n of each oracle query. We show that such constructions cannot achieve stretch that is even a single bit greater than the stretch of the given pseudorandom generator.
We then consider constructions with arbitrarily long seeds, but where the oracle queries are collectively chosen in a manner that depends only on a portion of the seed whose length is at most O(log n) bits longer than the length n of each query. We show that such constructions making constantly many non-adaptive queries cannot achieve stretch that is ω(log n) bits greater than the stretch of the given pseudorandom generator.
Finally, we consider a class of constructions motivated by streaming computation. Specifically, we consider constructions where the computation of each individual output bit depends only on the seed and on the response to a single query to a one-way permutation. We allow the seed to have a public portion that is arbitrarily long but must always be included in the output, and a non-public portion that is at most O(log n) bits long.
@INPROCEEDINGS{BJP11,
author = {Josh Bronson and Ali Juma and Periklis A. Papakonstantinou},
title = {Limits on the stretch of non-adaptive constructions of pseudorandom generators},
booktitle = {Theory of Cryptography Conference (TCC)},
year = {2011},
pages = {504--521},
address = {Providence, Rhode Island, USA}
}
On the impossibility of basing identity based encryption on trapdoor permutations
Dan Boneh, Periklis A. Papakonstantinou, Yevgeniy Vahlis, Charles W. Rackoff, Brent Waters
Foundations of Computer Science (FOCS), 2008
We ask whether an Identity Based Encryption (IBE) system can be built from simpler public-key primitives. We show that there is no black-box construction of IBE from Trapdoor Permutations (TDP) or even from Chosen Ciphertext Secure Public Key Encryption (CCA-PKE). These black-box separation results are based on an essential property of IBE, namely that an IBE system is able to compress exponentially many public keys into a short public parameters string.
@INPROCEEDINGS{BPRVW08,
author = {Dan Boneh and Periklis A. Papakonstantinou and Charles W. Rackoff and Yevgeniy Vahlis and Brent Waters},
title = {On the Impossibility of basing Identity Based Encryption on Trapdoor Permutations},
booktitle = {Foundations of Computer Science (FOCS)},
year = {2008},
month = {October},
pages = {283--292},
address = {Philadelphia, USA}
}
Machine Learning on Big Data
On the Power and Limits of Distance-based Learning
Periklis A. Papakonstantinou, Jia Xu, Guang Yang
International Conference on Machine Learning (ICML), 2016
We initiate the study of low-distortion finite metric embeddings in multiclass (and multilabel) classification where (i) both the space of input instances and the space of output classes have combinatorial metric structure and (ii) the concepts we wish to learn are low-distortion embeddings. We develop new geometric techniques and prove strong learning lower bounds. These provable limits hold even when we allow learners and classifiers to get advice from one or more experts. Our study overwhelmingly indicates that post-geometry assumptions are necessary in multiclass classification, as in natural language processing (NLP). Technically, the mathematical tools we developed in this work could be of independent interest to NLP. To the best of our knowledge, this is the first work which formally studies classification problems in combinatorial spaces and where the concepts are low-distortion embeddings.
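For intuition, the central definition can be checked directly from first principles; the sketch below (an illustrative helper, not code from the paper) computes the distortion of an injective map between two finite metric spaces as the product of its worst-case expansion and worst-case contraction.

```python
from itertools import combinations

def distortion(points, f, d_X, d_Y):
    """Distortion of an injective map f from a finite metric space (X, d_X)
    into (Y, d_Y): (max expansion) * (max contraction) over distinct pairs.
    A concept is a "low-distortion embedding" when this product is small.
    Illustrative sketch; the paper's combinatorial spaces are not modeled."""
    pairs = list(combinations(points, 2))
    expansion = max(d_Y(f(a), f(b)) / d_X(a, b) for a, b in pairs)
    contraction = max(d_X(a, b) / d_Y(f(a), f(b)) for a, b in pairs)
    return expansion * contraction
```

For example, under the Hamming metric the map `s -> s + s` scales every distance by exactly 2, so its distortion is 1 (an isometry up to scaling).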
@INPROCEEDINGS{PXY16,
author = {Periklis A. Papakonstantinou and Jia Xu and Guang Yang},
title = {On the Power and Limits of Distance-based Learning},
booktitle = {ICML},
year = {2016},
month = {June},
pages = {2263--2271},
address = {New York, NY, USA}
}
Bagging by design (on the suboptimality of Bagging)
Periklis A. Papakonstantinou, Jia Xu, Zhu Cao
AAAI Conference on Artificial Intelligence (AAAI), 2014
Bagging (Breiman 1996) and its variants are among the most popular methods for aggregating classifiers and regressors. Originally, its analysis assumed that the bootstraps are built from an unlimited, independent source of samples; therefore we call this form of bagging ideal-bagging. In the real world, however, base predictors are trained on data subsampled from a limited number of training samples and thus they behave very differently. We analyze the effect of intersections between the bootstraps, obtained by subsampling, that are used to train different base predictors. Most importantly, we provide an alternative subsampling method called design-bagging, based on a new construction of combinatorial designs, and prove it universally better than bagging. Methodologically, we succeed at this level of generality because we compare the prediction accuracy of bagging and design-bagging relative to the accuracy of ideal-bagging. This finds potential applications in more involved bagging-based methods. Our analytical results are backed up by experiments in classification and regression settings.
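The regime the analysis concerns, subsampling from one limited training set so that the bags intersect, can be sketched as below. The helper names are hypothetical, and the combinatorial-design construction of design-bagging itself is not reproduced; uniformly random subsamples stand in for it.

```python
import random

def subsample_bagging(data, n_bags, bag_size, train, seed=0):
    """Bagging with subsampling from one limited training set, so the bags
    intersect (unlike "ideal-bagging", which assumes unlimited independent
    samples). `train` fits one base predictor on a subsample. The paper's
    design-bagging would replace the random subsamples below with subsamples
    from a combinatorial design to control their intersections."""
    rng = random.Random(seed)
    bags = [rng.sample(range(len(data)), bag_size) for _ in range(n_bags)]
    predictors = [train([data[i] for i in bag]) for bag in bags]

    # Average pairwise overlap between bags: the quantity whose effect on
    # accuracy (relative to ideal-bagging) the analysis tracks.
    pairs = [(a, b) for a in range(n_bags) for b in range(a + 1, n_bags)]
    avg_overlap = sum(len(set(bags[a]) & set(bags[b])) for a, b in pairs) / len(pairs)

    def predict(x):
        # Aggregate by averaging (regression setting).
        return sum(p(x) for p in predictors) / n_bags

    return predict, avg_overlap
```

Replacing the final averaging with a majority vote gives the classification variant.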
SAT-solving and SAT-algorithms
Local Search for Hard SAT Formulas: The Strength of the Polynomial Law
Sixue Liu, Periklis A. Papakonstantinou
AAAI Conference on Artificial Intelligence (AAAI), 2016
Random k-CNF formulas at the anticipated k-SAT phase-transition point are prototypical hard k-SAT instances. We develop a stochastic local search algorithm and study it both theoretically and through a large-scale experimental study. The algorithm comes as a result of a systematic study that contrasts the rates at which a certain measure concentration phenomenon occurs. This study yields a new stochastic rule for local search. A strong point of our contribution is the conceptual simplicity of our algorithm. More importantly, the empirical results overwhelmingly indicate that our algorithm outperforms the state of the art, including a number of winners and medalist solvers from the recent SAT Competitions.
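A probSAT-style sketch conveys the flavor of a polynomial stochastic rule for local search: inside a random unsatisfied clause, a variable is flipped with probability proportional to a polynomial of its break count. This is an illustration under that assumption; the paper's actual rule and constants are not reproduced here.

```python
import random

def polynomial_law_walksat(clauses, n_vars, max_flips=100000, eps=1.0, power=2.6, seed=0):
    """Stochastic local search for CNF-SAT (clauses are lists of nonzero
    ints, DIMACS-style literals). In a random unsatisfied clause, flip a
    variable chosen with probability proportional to (eps + break)^(-power),
    a polynomial function of its break count. probSAT-style sketch only."""
    rng = random.Random(seed)
    # assign[v] is the truth value of variable v (1-indexed; index 0 unused)
    assign = [None] + [rng.random() < 0.5 for _ in range(n_vars)]

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    def break_count(v):
        # Clauses satisfied now that would become unsatisfied if v flipped.
        assign[v] = not assign[v]
        broken = sum(1 for c in clauses
                     if any(abs(lit) == v for lit in c) and not satisfied(c))
        assign[v] = not assign[v]
        return broken

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign[1:]                    # satisfying assignment found
        clause = rng.choice(unsat)
        weights = [(eps + break_count(abs(lit))) ** (-power) for lit in clause]
        flip = abs(rng.choices(clause, weights=weights)[0])
        assign[flip] = not assign[flip]
    return None                                  # gave up
```

The polynomial weighting biases flips toward variables that break few satisfied clauses, while still allowing occasional uphill moves.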
@INPROCEEDINGS{LP16,
author = {Sixue Liu and Periklis A. Papakonstantinou},
title = {Local Search for Hard SAT Formulas: The Strength of the Polynomial Law},
booktitle = {AAAI},
year = {2016},
month = {February},
pages = {732--738},
address = {Phoenix, AZ, USA}
}
Width-parameterized SAT: time-space tradeoffs
Eric Allender, Shiteng Chen, Tiancheng Lou, Periklis A. Papakonstantinou, Bangsheng Tang
Theory of Computing (ToC), 2014
Alekhnovich and Razborov (2002) presented an algorithm that solves $SAT$ on instances $\phi$ of size $n$ and treewidth $TW(\phi)$, using time and space bounded by $2^{O(TW(\phi))}n^{O(1)}$. Although several follow-up works appeared over the last decade, the first open question of Alekhnovich and Razborov remained essentially unresolved: Can one check satisfiability of formulas with small treewidth in polynomial space and in time as above? We essentially resolve this question, by (1) giving a polynomial-space algorithm with a slightly worse runtime, (2) providing a complexity-theoretic characterization of bounded-treewidth $SAT$, which strongly suggests that no polynomial-space algorithm can run significantly faster, and (3) presenting a spectrum of algorithms trading off time for space, between our PSPACE algorithm and the fastest known algorithm.
First, we give a simple algorithm that runs in polynomial space and achieves runtime $3^{TW(\phi)\log n}n^{O(1)}$, which approaches the runtime of Alekhnovich and Razborov (2002), but has an additional $\log n$ factor in the exponent. We conjecture that this extra $\log n$ factor is in general unavoidable.
Our negative results show that our conjecture holds if one believes a well-known complexity assumption, namely the $SC \neq NC$ conjecture and its scaled variants. Technically, we base our result on the following lemma. For arbitrary $k$, $SAT$ of treewidth $\log^k n$ is complete for the class of problems computed by circuits of logarithmic depth, semi-unbounded fan-in, and size $2^{O(\log^k n)}$ ($SAC^1$ when $k=1$). Problems in this class can be solved simultaneously in time-space $(2^{O(\log^{k+1}n)}, O(\log^{k+1}n))$, and also in $(2^{O(\log^k n)}, 2^{O(\log^k n)})$. Then, we show that our conjecture (for $SAT$ instances with polylog treewidth) is equivalent to the question of whether the small-space simulation of semi-unbounded circuit classes can be sped up without incurring a large space penalty. This is a recasting of the conjecture that $SAC^1$ (and even its subclass $NL$) is not contained in $SC$.
Although we cannot hope for an improvement asymptotically in the exponent of time and space, we introduce a new algorithmic technique which trades constants in the exponents: for each $\epsilon$ with $0 < \epsilon < 1$, we give an algorithm in time-space \[ \big( 3^{1.441(1-\epsilon)TW(\phi)\log|\phi|}\,|\phi|^{O(1)},\; 2^{2\epsilon TW(\phi)}|\phi|^{O(1)} \big)\,. \] We systematically study the limitations of our technique for trading off time and space, and we show that our bounds are the best achievable using this technique.
@article{ACLPT14,
author = {Eric Allender and Shiteng Chen and Tiancheng Lou and Periklis A. Papakonstantinou and Bangsheng Tang},
title = {Width-Parameterized SAT: Time-Space Tradeoffs},
year = {2014},
pages = {297--339},
doi = {10.4086/toc.2014.v010a012},
publisher = {Theory of Computing},
journal = {Theory of Computing},
volume = {10},
number = {12},
URL = {http://www.theoryofcomputing.org/articles/v010a012},
}
A note on width-parameterized SAT: an exact machine model characterization
Information Processing Letters (IPL), 2009
We characterize the complexity of SAT instances with path-decompositions of width $w(n)$. Although pathwidth is the most restrictive among the studied width-parameterizations of SAT, the most time-efficient algorithms known for such SAT instances run in time $2^{\Omega(w(n))}$, even when the path-decomposition is given in the input. We wish to better understand the decision complexity of SAT instances of width $\omega(\log n)$. We provide an exact correspondence between $SAT_{pw}[w(n)]$, the problem of SAT instances with a given path-decomposition of width $w(n)$, and $NL[r(n)]$, the class of problems decided by logspace Turing Machines with at most $r(n)$ passes over the nondeterministic tape. In particular, we show that $SAT_{pw}[w(n)]$ is hard for $NL[w(n)/\log n]$ under logspace reductions. When $NL[w(n)/\log n]$ is closed under logspace reductions, which is the case for the most interesting $w(n)$'s, we show that $SAT_{pw}[w(n)]$ is also complete.
@ARTICLE{P09b,
author = {Periklis A. Papakonstantinou},
title = {A note on width-parameterized SAT: an exact machine model characterization},
journal = {Information Processing Letters},
year = {2009},
volume = {110},
pages = {8--12},
number = {1}
}
Complexity and algorithms for well-structured k-SAT instances
Konstantinos Georgiou, Periklis A. Papakonstantinou
Theory and Applications of Satisfiability Testing (SAT), 2008
This paper consists of two conceptually related but independent parts. In the first part we initiate the study of k-SAT instances of bounded diameter. The diameter of an ordered CNF formula is defined as the maximum difference between the index of the first and the last occurrence of a variable. We investigate the relation between the diameter of a formula and the treewidth and the pathwidth of its corresponding incidence graph. We show that, under highly parallel and efficient transformations, diameter and pathwidth are equal up to a constant factor. Our main result is that the computational complexity of SAT, MaxSAT, and #SAT grows smoothly with the diameter (as a function of the number of variables). Our focus is on providing space-efficient and highly parallel algorithms, while the running time of our algorithms matches previously known results. Our results refer to any diameter, whereas for the special case where the diameter is O(log n) we show NL-completeness of SAT and NC^2 algorithms for MaxSAT and #SAT.
In the second part we deal directly with k-CNF formulas of bounded treewidth. We describe algorithms in an intuitive but not-so-standard model of computation. Then we apply constructive theorems from computational complexity to obtain deterministic, time-efficient, and simultaneously space-efficient algorithms for k-SAT, as asked by Alekhnovich and Razborov [1].
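The diameter measure from the first part is straightforward to compute; the sketch below indexes occurrences by clause position, which is one natural reading of the definition (the helper name is illustrative, not from the paper).

```python
def formula_diameter(clauses):
    """Diameter of an ordered CNF formula: the maximum, over all variables,
    of the gap between the variable's first and last occurrence, with
    occurrences indexed here by clause position. Clauses are lists of
    DIMACS-style literals (nonzero ints)."""
    first, last = {}, {}
    for i, clause in enumerate(clauses):
        for lit in clause:
            v = abs(lit)
            first.setdefault(v, i)   # record only the first occurrence
            last[v] = i              # keep overwriting: ends at the last
    return max(last[v] - first[v] for v in first)
```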
@INPROCEEDINGS{GP08,
author = {Konstantinos Georgiou and Periklis A. Papakonstantinou},
title = {Complexity and algorithms for well-structured k{SAT} instances},
booktitle = {Theory and Applications of Satisfiability Testing (SAT 2008)},
year = {2008},
pages = {105--118},
address = {Guangzhou, China},
month = {May}
}
Space-bounded Communication Complexity
Overlays and Limited Memory Communication Mode(l)s
Periklis A. Papakonstantinou, Dominik Scheder, Hao Song
Conference on Computational Complexity (CCC), 2014
We give new characterizations and lower bounds relating classes in the communication complexity polynomial hierarchy and circuit complexity to limited-memory communication models.
We introduce the notion of rectangle overlay complexity of a function $f: \{0,1\}^n\times \{0,1\}^n\to\{0,1\}$. This is a natural combinatorial complexity measure in terms of combinatorial rectangles in the communication matrix of $f$. Furthermore, we consider memoryless and limited-memory communication models, originally introduced in [11] with slightly different terminology. In these communication models there are two parameters of interest: (i) the message length or space, and (ii) the number of memory states. Specifically, these are one-way protocols which proceed in rounds. In each round, Alice sends one message of bounded length to Bob; receiving a message from Alice, Bob has to decide on the spot whether to output $0$ or $1$, or to continue the protocol. If he decides to continue, he immediately forgets Alice's message. In memoryless protocols, no memory is transferred between different rounds (but Bob still has "space" to hold Alice's messages within each round). We can make Bob more powerful by giving him some constant-size memory, which he can update at the end of each round.
We show that rectangle overlays completely characterize memoryless protocols. Then, we go on to show several connections to the communication complexity polynomial hierarchy defined by Babai, Frankl and Simon in 1986 [6]. This hierarchy has recently regained attention because of its connection to the algebrization barrier in complexity theory [1]. We show that $P^{NP}$ is completely characterized by memoryless protocols with $\textrm{polylog}(n)$ space (message length), and thus it admits a purely combinatorial characterization in terms of rectangle overlays. If in addition Bob is given $3$ states of memory besides $\textrm{polylog}(n)$ space (message length), Alice and Bob can compute every level of $\Sigma_k^{cc}$ in the communication complexity hierarchy (for constant $k$), and also every function in $AC^0$. Furthermore, we show that a $5$-state Bob with $\textrm{polylog}(n)$ space (message length) can compute exactly the functions in the communication class $PSPACE^{cc}$. This gives the first meaningful characterization of $PSPACE^{cc}$ in terms of space, a class originally defined in [6] without any notion of space. We also study equivalences and separations between our limited-memory communication model and branching programs, and relations to circuit classes.
Space-bounded Communication Complexity
Joshua Brody, Shiteng Chen, Periklis A. Papakonstantinou, Hao Song, Xiaoming Sun
Innovations in Theoretical Computer Science (ITCS), 2013
In the past thirty years, Communication Complexity has emerged as a foundational tool for proving lower bounds in many areas of computer science. Its power comes from its generality, but this generality comes at a price: no super-linear communication lower bound is possible, since a player may communicate his entire input. However, what if the players are limited in their ability to recall parts of their interaction?
We introduce memory models for 2-party communication complexity. Our general model is as follows: two computationally unrestricted players, Alice and Bob, each have $s(n)$ bits of memory. When a player receives a bit of communication, he "compresses" his state. This compression may be an arbitrary function of his current memory contents, his input, and the bit of communication just received; the only restriction is that the compression must return at most $s(n)$ bits. We obtain memory hierarchy theorems (also comparing this general model with its restricted variants), and show super-linear lower bounds for some explicit (non-boolean) functions.
Our main conceptual and technical contribution concerns the following variant. The communication is one-way, from Alice to Bob, where Bob controls two types of memory: (i) a large, oblivious memory, where updates are only a function of the received bit and the current memory content, and (ii) a smaller, non-oblivious/general memory, where updates can be a function of the input given to Bob. We exhibit natural protocols where this semi-obliviousness shows up. For this model we also introduce new techniques through which certain limitations of space-bounded computation are revealed. One of the main motivations of this work is in understanding the difference in the use of space when computing the following functions: Equality (EQ), Inner Product (IP), and connectivity in a directed graph (REACH). When viewed as communication problems, EQ can be decided using $0$ non-oblivious bits (and $\log_2 n$ oblivious bits), IP requires exactly $1$ non-oblivious bit, whereas for REACH we obtain the same lower bound as for IP and conjecture that the actual bound is $\Omega(\log^2 n)$. In fact, proving that $1$ non-oblivious bit is required becomes technically sophisticated, and the question even for $2$ non-oblivious bits for any explicit boolean function remains open.
@INPROCEEDINGS{BCPSS13,
author = {Joshua Brody and Shiteng Chen and Periklis A. Papakonstantinou and Hao Song and Xiaoming Sun},
title = {Space-bounded Communication Complexity},
booktitle = {Innovations in Theoretical Computer Science (ITCS)},
pages = {159--172},
year = {2013}
}
Restricted models of computation: theory of simple algorithms
Characterizing sets of jobs that admit optimal greedylike algorithms
Periklis A. Papakonstantinou, Charles Rackoff
Journal of Scheduling, 2010
The “Priority Algorithm” is a model of computation, introduced by Borodin, Nielsen, and Rackoff [BNR03], which formulates a wide class of greedy algorithms. For an arbitrary set S of jobs, we are interested in whether or not there exists a priority algorithm that gains optimal profit on every subset of S. In the case where the jobs are all intervals, we characterize such sets S and give an efficient algorithm (when S is finite) for determining this. We show that in general, however, the problem is NP-hard.
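For a concrete example of the objects being characterized, the classic earliest-finish-time rule is a fixed priority algorithm that is optimal on every input of unit-profit intervals; the sketch below is illustrative and is not the paper's characterization procedure.

```python
def eft_priority_schedule(jobs):
    """A fixed priority algorithm for interval scheduling: order jobs by
    earliest finishing time and irrevocably accept each job iff it does not
    overlap a previously accepted one. With unit profits this classic rule
    gains optimal profit on every input. Jobs are (start, finish) pairs."""
    accepted = []
    for s, f in sorted(jobs, key=lambda job: job[1]):       # the priority order
        if all(f0 <= s or f <= s0 for s0, f0 in accepted):  # no overlap
            accepted.append((s, f))
    return accepted
```

The fixed ordering and the irrevocable accept/reject decision are exactly the two features that make this a (fixed, greedy) priority algorithm in the sense of [BNR03].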
@ARTICLE{PR10,
author = {Periklis A. Papakonstantinou and Charles Rackoff},
title = {Characterizing sets of jobs that admit optimal greedylike algorithms},
journal = {Journal of Scheduling},
year = {2010},
volume = {13},
pages = {163--176},
number = {2}
}
On the structure of optimal greedy computation (for Job Scheduling)
Mathematical Foundations of Computer Science (MFCS), 2009
We consider the Priority Algorithm [BNR03] as a syntactic model formulating the concept of a greedy algorithm for Job Scheduling, and we study the computation of optimal priority algorithms. A Job Scheduling subproblem S is determined by a (possibly infinite) set of jobs, every finite subset of which potentially forms an input to a scheduling algorithm. An algorithm is optimal for S if it gains optimal profit on every input. To the best of our knowledge there is no previous work about such arbitrary subproblems of Job Scheduling. For a finite S, it is coNP-hard to decide whether S admits an optimal priority algorithm [PR07]. This indicates that meaningful characterizations of subproblems admitting optimal priority algorithms may not be possible. In this paper we consider those S that do admit optimal priority algorithms, and we show that the way in which all such algorithms compute has nontrivial and interesting structural features.
@INPROCEEDINGS{P09,
author = {Periklis A. Papakonstantinou},
title = {On the structure of optimal greedy computation (for Job Scheduling)},
booktitle = {Mathematical Foundations of Computer Science (MFCS)},
year = {2009},
month = {August},
pages = {612--623},
address = {High Tatras, Slovakia}
}
Hierarchies for classes of priority algorithms for Job Scheduling
Theoretical Computer Science, 2006
The priority algorithm is a model of computation capturing the notion of greedy and greedy-like algorithms. This paper concerns priority algorithms for Job Scheduling. Three main restrictions are defined for priority algorithms, namely: memoryless, greedy, and fixed. It was asked in [A. Borodin, M.N. Nielsen, C. Rackoff, (Incremental) priority algorithms, in: Proc. 13th Annu. Symp. Discrete Algorithms (SODA), January 2002, pp. 752–761 (also in Algorithmica 37(4) (2003) 295–326)] whether the general class of priority algorithms is of different power from the restricted class of greedy priority algorithms (for the Job Scheduling problem). We answer this question affirmatively by showing that a specific priority algorithm cannot be simulated by any greedy priority algorithm on every input. Furthermore, we systematically compare every two classes of priority algorithms for different settings of Job Scheduling. We also define a hierarchy for priority algorithms of bounded memory, which lies between the class of memoryless algorithms and the class of priority algorithms with memory, and we show that this memory hierarchy is robust.
@ARTICLE{P06,
author = {Periklis A. Papakonstantinou},
title = {Hierarchies for classes of priority algorithms for Job Scheduling},
journal = {Theoretical Computer Science},
year = {2006},
month = {March},
volume = {352},
pages = {181--189},
number = {1--3},
}
Miscellanea
On the Complexity of Constructing Golomb Rulers
Christophe Meyer, Periklis A. Papakonstantinou
Discrete Applied Mathematics, 2009
A Golomb Ruler is a ruler with integer marks where the distances between every two marks are distinct. Golomb Rulers find diverse applications in computer science and electrical engineering. To the best of our knowledge, the computational complexity of problems related to the construction of Golomb Rulers was previously unknown. We provide natural definitions for problems related to the construction of such rulers. The main contribution of this work is NP-completeness results for two such decision problems.
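The defining property translates directly into a membership check; a small illustrative sketch follows (the NP-completeness results concern construction problems, not this easy verification).

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """Check the defining property of a Golomb Ruler: the distances between
    every two integer marks are pairwise distinct."""
    distances = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(distances) == len(set(distances))
```

For example, the marks 0, 1, 4, 9, 11 form a Golomb Ruler, while 0, 1, 2 does not (the distance 1 repeats).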
@ARTICLE{MP09,
author = {Christophe Meyer and Periklis A. Papakonstantinou},
title = {On the Complexity of Constructing Golomb Rulers},
journal = {Discrete Applied Mathematics},
year = {2009},
volume = {157},
pages = {738--748},
number = {4}
}
Bounded and Ordered Satisfiability: Connecting Recognition with Lambek-style Calculi to Classical Satisfiability Testing
Michail Flouris, Lap Chi Lau, Tsuyoshi Morioka, Periklis A. Papakonstantinou, Gerald Penn
Mathematics of Language (MOL), 2003
(this paper doesn't have an abstract)
@INPROCEEDINGS{FLMPP03,
author = {Michail Flouris and Lap Chi Lau and Tsuyoshi Morioka and Periklis A. Papakonstantinou and Gerald Penn},
title = {Bounded and Ordered Satisfiability: Connecting Recognition with Lambek-style Calculi to Classical Satisfiability Testing},
booktitle = {Mathematics of Language (MoL)},
year = {2003},
month = {June},
pages = {45--55},
address = {Indiana, USA}
}
(25 papers, not counting journal and proceedings versions of the same work twice)