Genetic Algorithms and Evolutionary Computation

Creationists occasionally charge that evolution is useless as a scientific theory because it produces no practical benefits and has no relevance to daily life. However, the evidence of biology alone shows that this claim is untrue. There are numerous natural phenomena for which evolution gives us a sound theoretical underpinning. To name just one, the observed development of resistance - to insecticides in crop pests, to antibiotics in bacteria, to chemotherapy in cancer cells, and to anti-retroviral drugs in viruses such as HIV - is a straightforward consequence of the laws of mutation and selection, and an understanding of these principles has helped us to craft strategies for dealing with these harmful organisms. The evolutionary postulate of common descent has aided the development of new medical drugs and techniques by giving researchers a good idea of which organisms they should experiment on to obtain results that are most likely to be relevant to humans. Finally, the principle of selective breeding has been used to great effect by humans to create customized organisms, unlike anything found in nature, for their own benefit. The canonical example, of course, is the many varieties of domesticated dogs (breeds as diverse as bulldogs, chihuahuas and dachshunds were produced from wolves in only a few thousand years), but less well-known examples include cultivated maize (very different from its wild relatives, none of which have the familiar ears of human-grown corn), goldfish (like dogs, we have bred varieties that differ dramatically from the wild type), and dairy cows (with immense udders far larger than would be needed merely for nourishing offspring).

Critics might charge that creationists can explain these things without recourse to evolution. For example, creationists often explain the development of resistance to antibiotics in bacteria, or the changes wrought in domesticated animals by artificial selection, by presuming that God decided to create organisms in fixed groups, called kinds or baramin. Though natural microevolution or human-guided artificial selection can bring about different varieties within the originally created dog-kind, or cow-kind, or bacteria-kind (!), no amount of time or genetic change can transform one kind into another. However, exactly how creationists determine what a kind is, or what mechanism prevents living things from evolving beyond its boundaries, is never explained.

But in the last several decades, the continuing advance of modern technology has brought about something new. Evolution is now producing practical benefits in a very different field, and this time creationists cannot claim that their explanation fits the facts just as well. That field is computer science, and the benefits come from a programming strategy called genetic algorithms. This essay will explain what genetic algorithms are and will show how they are relevant to the evolution/creationism debate.
What is a genetic algorithm?

Concisely stated, a genetic algorithm (or GA for short) is a programming technique that mimics biological evolution as a problem-solving strategy. Given a specific problem to solve, the input to the GA is a set of potential solutions to that problem, encoded in some fashion, and a metric called a fitness function that allows each candidate to be quantitatively evaluated. These candidates may be solutions already known to work, with the aim being that the GA improves them, but more often they are generated at random.

The GA then evaluates each candidate according to the fitness function. In a pool of randomly generated candidates, of course, most will not work at all, and these are deleted. However, purely by chance, a few may hold promise - they may show activity, even if only weak and imperfect activity, toward solving the problem. These promising candidates are kept and allowed to reproduce. Multiple copies are made of them, but the copies are not perfect; random changes are introduced during the copying process. These digital offspring then go on to the next generation, forming a new pool of candidate solutions, and are subjected to a second round of fitness evaluation. Those candidate solutions which were worsened, or made no better, by the changes to their code are again deleted; but again, purely by chance, the random variations introduced into the population may have improved some individuals, making them better, more complete or more efficient solutions to the problem. Again these winning individuals are selected and copied over into the next generation with random changes, and the process repeats. The expectation is that the average fitness of the population will increase each round, and so by repeating this process for hundreds or thousands of rounds, very good solutions to the problem can be discovered.

As astonishing and counterintuitive as it may seem to some, genetic algorithms have proven to be an enormously powerful and successful problem-solving strategy, dramatically demonstrating the power of evolutionary principles. Genetic algorithms have been used in a wide variety of fields to evolve solutions to problems as difficult as, or more difficult than, those faced by human designers. Moreover, the solutions they come up with are often more efficient, more elegant, or more complex than anything comparable that a human engineer would produce. In some cases, genetic algorithms have come up with solutions that baffled the programmers who wrote the algorithms in the first place.

Methods of representation

Before a genetic algorithm can be put to work on any problem, a method is needed to encode potential solutions to that problem in a form that a computer can process. One common approach is to encode solutions as binary strings: sequences of 1s and 0s, where the digit at each position represents the value of some aspect of the solution. Another, similar approach is to encode solutions as arrays of integers or decimal numbers, with each position again representing some particular aspect of the solution.
This approach allows for greater precision and complexity than the comparatively restricted method of using binary numbers alone, and is often intuitively closer to the problem space (Fleming and Purshouse 2002, p. 1228). This technique was used, for example, in the work of Steffen Schulze-Kremer, who wrote a genetic algorithm to predict the three-dimensional structure of a protein based on the sequence of amino acids that go into it (Mitchell 1996, p. 62). Schulze-Kremer's GA used real-valued numbers to represent the so-called torsion angles between the peptide bonds that connect amino acids. (A protein is made up of a sequence of basic building blocks called amino acids, which are joined together like the links in a chain. Once all the amino acids are linked, the protein folds up into a complex three-dimensional shape based on which amino acids attract each other and which repel each other. The shape of a protein determines its function.) Genetic algorithms for training neural networks often use this method of encoding as well.

A third approach is to represent individuals in a GA as strings of letters, where each letter again stands for a specific aspect of the solution. One example of this technique is Hiroaki Kitano's grammatical encoding approach, in which a GA was put to the task of evolving a simple set of rules called a context-free grammar that was in turn used to generate neural networks for a variety of problems (Mitchell 1996, p. 74).

The virtue of all three of these methods is that they make it easy to define operators that cause the random changes in the selected candidates: flip a 0 to a 1 or vice versa, add or subtract from the value of a number by a randomly chosen amount, or change one letter to another. Another strategy, developed principally by John Koza of Stanford University and called genetic programming, represents programs as branching data structures called trees (Koza et al. 2003, p. 35). In this approach, random changes can be brought about by changing the operator or altering the value at a given node in the tree, or by replacing one subtree with another.

Figure 1: Three simple program trees of the kind normally used in genetic programming. The mathematical expression that each one represents is given underneath.

It is important to note that evolutionary algorithms do not need to represent candidate solutions as data strings of fixed length. Some do represent them this way, but others do not; for example, Kitano's grammatical encoding discussed above can be efficiently scaled to create large and complex neural networks, and Koza's genetic programming trees can grow arbitrarily large as necessary to solve whatever problem they are applied to.

Methods of selection

There are many different techniques which a genetic algorithm can use to select the individuals to be copied over into the next generation, but listed below are some of the most common methods. Some of these methods are mutually exclusive, but others can be, and often are, used in combination.

Elitist selection:
The most fit members of each generation are guaranteed to be selected. (Most GAs do not use pure elitism, but instead use a modified form in which the single best individual, or a few of the best individuals, from each generation are copied into the next generation in case nothing better turns up.)

Fitness-proportionate selection:
More fit individuals are more likely, but not certain, to be selected.

Roulette-wheel selection:
A form of fitness-proportionate selection in which the chance of an individual being selected is proportional to the amount by which its fitness is greater or less than its competitors' fitness. (Conceptually, this can be represented as a game of roulette - each individual gets a slice of the wheel, but more fit individuals get larger slices than less fit ones. The wheel is then spun, and whichever individual owns the section on which it lands each time is chosen.)

Scaling selection:
As the average fitness of the population increases, the strength of the selective pressure also increases and the fitness function becomes more discriminating. This method can be helpful in making the best selection later on, when all individuals have relatively high fitness and only small differences in fitness distinguish one from another.

Tournament selection:
Subgroups of individuals are chosen from the larger population, and members of each subgroup compete against each other. Only one individual from each subgroup is chosen to reproduce.

Rank selection:
Each individual in the population is assigned a numerical rank based on fitness, and selection is based on this ranking rather than on absolute differences in fitness. The advantage of this method is that it can prevent very fit individuals from gaining dominance early at the expense of less fit ones, which would reduce the population's genetic diversity and might hinder attempts to find an acceptable solution.

Generational selection:
The offspring of the individuals selected from each generation become the entire next generation. No individuals are retained between generations.

Steady-state selection:
The offspring of the individuals selected from each generation go back into the pre-existing gene pool, replacing some of the less fit members of the previous generation. Some individuals are retained between generations.

Hierarchical selection:
Individuals go through multiple rounds of selection each generation. Lower-level evaluations are faster and less discriminating, while those that survive to higher levels are evaluated more rigorously. The advantage of this method is that it reduces overall computation time by using faster, less selective evaluation to weed out the majority of individuals that show little or no promise, and only subjecting those that survive this initial test to more rigorous and more computationally expensive fitness evaluation.

Methods of change

Once selection has chosen fit individuals, they must be randomly altered in the hope of improving their fitness for the next generation. There are two basic strategies for accomplishing this. The first and simplest is called mutation. Just as mutation in living things changes one gene to another, so mutation in a genetic algorithm causes small alterations at single points in an individual's code.
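To make these operators concrete, here is a minimal Python sketch of one generational cycle using tournament selection and point mutation on bit strings (crossover, the second operator, is described next and illustrated separately later in this essay). It is an illustration only, not drawn from any of the systems cited here; the fitness function (simply counting 1s), the population size, the tournament size and the mutation rate are arbitrary values chosen for the example.

    import random

    STRING_LENGTH = 20      # length of each bit string
    POPULATION_SIZE = 30    # arbitrary example values
    TOURNAMENT_SIZE = 3
    MUTATION_RATE = 0.05    # per-bit probability of flipping

    def fitness(individual):
        # Toy fitness function: count the 1s in the string.
        return sum(individual)

    def tournament_select(population):
        # Tournament selection: pick a small subgroup at random
        # and return its most fit member.
        contestants = random.sample(population, TOURNAMENT_SIZE)
        return max(contestants, key=fitness)

    def mutate(individual):
        # Point mutation: flip each bit with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in individual]

    def next_generation(population):
        # Build an entirely new population from mutated copies of
        # tournament winners (generational replacement).
        return [mutate(tournament_select(population))
                for _ in range(POPULATION_SIZE)]

    # Random initial population, then a few hundred rounds of evolution.
    population = [[random.randint(0, 1) for _ in range(STRING_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(200):
        population = next_generation(population)
    print(max(fitness(ind) for ind in population))

In a fuller GA of the kind described above, the selected winners would typically also undergo crossover before being copied into the next generation.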
The second method is called crossover, and entails choosing two individuals to swap segments of their code, producing artificial offspring that are combinations of their parents. This process is intended to simulate the analogous process of recombination that occurs between chromosomes during sexual reproduction. Common forms of crossover include single-point crossover, in which a point of exchange is set at a random location in the two individuals' genomes and one individual contributes all of its code from before that point while the other contributes all of its code from after that point to produce an offspring, and uniform crossover, in which the value at any given location in the offspring's genome is either the value of one parent's genome at that location or the value of the other parent's genome at that location, chosen with 50/50 probability.

Figure 2: Crossover and mutation. The above diagrams illustrate the effect of each of these genetic operators on individuals in a population of 8-bit strings. The upper diagram shows two individuals undergoing single-point crossover; the point of exchange is set between the fifth and sixth positions in the genome, producing a new individual that is a hybrid of its progenitors. The second diagram shows an individual undergoing mutation at position 4, changing the 0 at that position in its genome to a 1.

Other problem-solving techniques

With the rise of artificial life computing and the development of heuristic methods, other computerized problem-solving techniques have emerged that are in some ways similar to genetic algorithms. This section explains some of these techniques, in what ways they resemble GAs, and in what ways they differ.

Neural networks

A neural network, or neural net for short, is a problem-solving method based on a computer model of how neurons are connected in the brain. A neural network consists of layers of processing units called nodes joined by directional links: one input layer, one output layer, and zero or more hidden layers in between. An initial pattern of input is presented to the input layer of the neural network, and nodes that are stimulated then transmit a signal to the nodes of the next layer to which they are connected. If the sum of all the inputs entering one of these virtual neurons is higher than that neuron's so-called activation threshold, that neuron itself activates and passes on its own signal to the neurons in the next layer. The pattern of activation therefore spreads forward until it reaches the output layer and is there returned as a solution to the presented input. Just as in the nervous system of biological organisms, neural networks learn and fine-tune their performance over time through repeated rounds of adjusting their thresholds until the actual output matches the desired output for any given input. This process can be supervised by a human experimenter or may run automatically using a learning algorithm (Mitchell 1996, p. 52). Genetic algorithms have been used both to build and to train neural networks.
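The following minimal Python sketch illustrates the threshold-based forward propagation just described, using a network of the same general shape as the one in Figure 3 below (four input nodes, three hidden nodes, four output nodes). The particular connections and thresholds are made-up values chosen only to show the mechanism, and every connection is given a weight of 1 for simplicity.

    # Minimal threshold-unit network: a node fires (outputs 1) only if
    # the number of active inputs reaching it meets its activation
    # threshold. Connections and thresholds are arbitrary examples.

    def layer_output(inputs, connections, thresholds):
        # connections[j] lists which nodes of the previous layer feed node j.
        return [1 if sum(inputs[i] for i in connections[j]) >= thresholds[j] else 0
                for j in range(len(thresholds))]

    input_pattern = [1, 0, 1, 1]                       # presented to the input layer
    hidden_connections = [[0, 1, 2], [1, 2, 3], [0, 3]]
    hidden_thresholds = [2, 2, 1]
    output_connections = [[0, 1], [1, 2], [0, 2], [2]]
    output_thresholds = [1, 2, 1, 1]

    hidden = layer_output(input_pattern, hidden_connections, hidden_thresholds)
    output = layer_output(hidden, output_connections, output_thresholds)
    print(output)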
Figure 3: A simple feedforward neural network, with one input layer consisting of four neurons, one hidden layer consisting of three neurons, and one output layer consisting of four neurons. The number on each neuron represents its activation threshold: it fires only if it receives at least that many inputs. The diagram shows the neural network being presented with an input string and shows how activation spreads forward through the network to produce an output.

Hill-climbing

Similar to genetic algorithms, though more systematic and less random, a hill-climbing algorithm begins with one initial solution to the problem at hand, usually chosen at random. The string is then mutated, and if the mutation results in higher fitness for the new solution than for the previous one, the new solution is kept; otherwise, the current solution is retained. The algorithm is then repeated until no mutation can be found that causes an increase in the current solution's fitness, and this solution is returned as the result (Koza et al. 2003, p. 59). (To understand where the name of this technique comes from, imagine that the space of all possible solutions to a given problem is represented as a three-dimensional contour landscape. A given set of coordinates on that landscape represents one particular solution. Those solutions that are better are higher in altitude, forming hills and peaks; those that are worse are lower in altitude, forming valleys. A hill-climber is then an algorithm that starts out at a given point on the landscape and moves inexorably uphill.) Hill-climbing is what is known as a greedy algorithm, meaning it always makes the best choice available at each step in the hope that the overall best result can be achieved this way. By contrast, methods such as genetic algorithms and simulated annealing, discussed below, are not greedy; these methods sometimes make suboptimal choices in the hope that they will lead to better solutions later on.

Simulated annealing

Another optimization technique similar to evolutionary algorithms is known as simulated annealing. The idea borrows its name from the industrial process of annealing, in which a material is heated above a critical point to soften it, then gradually cooled in order to erase defects in its crystalline structure, producing a more stable and regular lattice arrangement of atoms (Haupt and Haupt 1998, p. 16). In simulated annealing, as in genetic algorithms, there is a fitness function that defines a fitness landscape; however, rather than a population of candidates as in GAs, there is only one candidate solution. Simulated annealing also adds the concept of temperature, a global numerical quantity which gradually decreases over time. At each step of the algorithm, the solution mutates (which is equivalent to moving to an adjacent point on the fitness landscape). The fitness of the new solution is then compared to the fitness of the previous solution; if it is higher, the new solution is kept. Otherwise, the algorithm decides whether to keep or discard it based on the temperature.
If the temperature is high, as it is initially, even changes that cause significant decreases in fitness may be kept and used as the basis for the next round of the algorithm, but as the temperature decreases, the algorithm becomes more and more inclined to accept only fitness-increasing changes. Finally, the temperature reaches zero and the system freezes; whatever configuration it is in at that point becomes the solution. Simulated annealing is often used for engineering design applications such as determining the physical layout of components on a computer chip (Kirkpatrick, Gelatt and Vecchi 1983).

A brief history of GAs

The earliest instances of what might today be called genetic algorithms appeared in the late 1950s and early 1960s, programmed on computers by evolutionary biologists who were explicitly seeking to model aspects of natural evolution. It did not occur to any of them that this strategy might be more generally applicable to artificial problems, but that recognition was not long in coming: evolutionary computation was definitely in the air in the formative days of the electronic computer (Mitchell 1996, p. 2). By 1962, researchers such as G.E.P. Box, G.J. Friedman, W.W. Bledsoe and H.J. Bremermann had all independently developed evolution-inspired algorithms for function optimization and machine learning, but their work attracted little follow-up. A more successful development in this area came in 1965, when Ingo Rechenberg, then of the Technical University of Berlin, introduced a technique he called evolution strategy, though it was more similar to hill-climbers than to genetic algorithms. In this technique, there was no population or crossover; one parent was mutated to produce one offspring, and the better of the two was kept and became the parent for the next round of mutation (Haupt and Haupt 1998, p. 146). Later versions introduced the idea of a population. Evolution strategies are still employed today by engineers and scientists, especially in Germany.

The next important development in the field came in 1966, when L.J. Fogel, A.J. Owens and M.J. Walsh introduced in America a technique they called evolutionary programming. In this method, candidate solutions to problems were represented as simple finite-state machines; like Rechenberg's evolution strategy, their algorithm worked by randomly mutating one of these simulated machines and keeping the better of the two (Mitchell 1996, p. 2; Goldberg 1989, p. 105). Also like evolution strategies, a broader formulation of the evolutionary programming technique is still an area of research today. However, what was still lacking in both of these methodologies was recognition of the importance of crossover.

As early as 1962, John Holland's work on adaptive systems laid the foundation for later developments; most notably, Holland was also the first to explicitly propose crossover and other recombination operators. However, the seminal work in the field of genetic algorithms came in 1975, with the publication of the book Adaptation in Natural and Artificial Systems.
Building on earlier research and papers both by Holland himself and by colleagues at the University of Michigan, this book was the first to systematically and rigorously present the concept of adaptive digital systems using mutation, selection and crossover, simulating processes of biological evolution, as a problem-solving strategy. The book also attempted to put genetic algorithms on a firm theoretical footing by introducing the notion of schemata (Mitchell 1996, p. 3; Haupt and Haupt 1998, p. 147). That same year, Kenneth De Jong's important dissertation established the potential of GAs by showing that they could perform well on a wide variety of test functions, including noisy, discontinuous and multimodal search landscapes (Goldberg 1989, p. 107).

These foundational works established more widespread interest in evolutionary computation. By the early to mid-1980s, genetic algorithms were being applied to a broad range of subjects, from abstract mathematical problems like bin-packing and graph coloring to tangible engineering issues such as pipeline flow control, pattern recognition and classification, and structural optimization (Goldberg 1989, p. 128). At first these applications were mainly theoretical. However, as research continued to proliferate, genetic algorithms migrated into the commercial sector, their rise fueled by the exponential growth of computing power and the development of the Internet. Today, evolutionary computation is a thriving field, and genetic algorithms are solving problems of everyday interest (Haupt and Haupt 1998, p. 147) in areas as diverse as stock market prediction and portfolio planning, aerospace engineering, microchip design, biochemistry and molecular biology, and scheduling at airports and on assembly lines. The power of evolution has touched virtually any field one cares to name, shaping the world around us invisibly in countless ways, and new uses continue to be discovered as research goes on. At the heart of it all lies nothing more than Charles Darwin's simple, powerful insight: that the random chance of variation, coupled with the law of selection, is a problem-solving technique of immense power and nearly unlimited application.

What are the strengths of GAs?

The first and most important point is that genetic algorithms are intrinsically parallel. Most other algorithms are serial and can explore the solution space of a problem in only one direction at a time; if the solution they discover turns out to be suboptimal, there is nothing to do but abandon all work previously completed and start over. However, since GAs have multiple offspring, they can explore the solution space in multiple directions at once. If one path turns out to be a dead end, they can easily eliminate it and continue work on more promising avenues, giving them a greater chance on each run of finding the optimal solution.

However, the advantage of parallelism goes beyond this. Consider the following: all the 8-digit binary strings (strings of 0s and 1s) form a search space, which can be represented as ******** (where the * stands for either 0 or 1). The string 01101010 is one member of this space.
However, it is also a member of the space 0*******, the space 01******, the space 0******0, the space 0*1*1*1*, the space 01*01**0, and so on. By evaluating the fitness of this one particular string, a genetic algorithm would be sampling each of these many spaces to which it belongs. Over many such evaluations, it would build up an increasingly accurate value for the average fitness of each of these spaces, each of which has many members. Therefore, a GA that explicitly evaluates a small number of individuals is implicitly evaluating a much larger group of individuals - just as a pollster who asks questions of a certain member of an ethnic, religious or social group hopes to learn something about the opinions of all members of that group, and can therefore reliably predict national opinion while sampling only a small percentage of the population. In the same way, the GA can home in on the space with the highest-fitness individuals and find the overall best one from that group. In the context of evolutionary algorithms, this is known as the Schema Theorem, and it is the central advantage of a GA over other problem-solving methods (Holland 1992, p. 68; Mitchell 1996, pp. 28-29; Goldberg 1989, p. 20).

Due to the parallelism that allows them to implicitly evaluate many schemata at once, genetic algorithms are particularly well-suited to solving problems where the space of all potential solutions is truly huge - too vast to search exhaustively in any reasonable amount of time. Most problems that fall into this category are known as nonlinear. In a linear problem, the fitness of each component is independent, so any improvement to any one part results in an improvement of the system as a whole. Needless to say, few real-world problems are like this. Nonlinearity is the norm, where changing one component may have ripple effects on the entire system, and where multiple changes that are individually detrimental may lead to much greater improvements in fitness when combined. Nonlinearity results in a combinatorial explosion: the space of 1,000-digit binary strings can be exhaustively searched by evaluating only 2,000 possibilities if the problem is linear, whereas if it is nonlinear an exhaustive search requires evaluating 2^1000 possibilities - a number that would take over 300 digits to write out in full.

Fortunately, the implicit parallelism of a GA allows it to surmount even this enormous number of possibilities, successfully finding optimal or very good results in a short period of time after directly sampling only small regions of the vast fitness landscape (Forrest 1993, p. 877). For example, a genetic algorithm developed jointly by engineers from General Electric and Rensselaer Polytechnic Institute produced a high-performance jet engine turbine design that was three times better than a human-designed configuration and 50% better than a configuration designed by an expert system, by successfully navigating a solution space containing more than 10^387 possibilities. Conventional methods for designing such turbines are a central part of engineering projects that can take up to five years and cost over $2 billion; the genetic algorithm discovered this solution after two days on a typical desktop workstation (Holland 1992, p. 72).
Another notable strength of genetic algorithms is that they perform well on problems for which the fitness landscape is complex - ones where the fitness function is discontinuous, noisy, changes over time, or has many local optima. Most practical problems have a vast solution space that is impossible to search exhaustively; the challenge then becomes how to avoid the local optima - solutions that are better than all others similar to them, but that are not as good as different solutions elsewhere in the solution space. Many search algorithms can become trapped by local optima: if they reach the top of a hill on the fitness landscape, they will discover that no better solutions exist nearby and conclude that they have reached the best one, even though higher peaks exist elsewhere on the map. Evolutionary algorithms, on the other hand, have proven effective at escaping local optima and discovering the global optimum even in a very rugged and complex fitness landscape. (It should be noted that in reality there is usually no way to tell whether a given solution to a problem is the one global optimum or just a very high local optimum. However, even if a GA does not always deliver a provably perfect solution to a problem, it can almost always deliver at least a very good solution.) All four of a GA's major components - parallelism, selection, mutation and crossover - work together to accomplish this. In the beginning, the GA generates a diverse initial population, casting a net over the fitness landscape. (Koza (2003, p. 506) compares this to an army of parachutists dropping onto the landscape of a problem's search space, each one given orders to find the highest peak.) Small mutations enable each individual to explore its immediate neighborhood, while selection focuses progress, guiding the algorithm's offspring uphill to more promising parts of the solution space (Holland 1992, p. 68).

However, crossover is the key element that distinguishes genetic algorithms from other methods such as hill-climbers and simulated annealing. Without crossover, each individual solution is on its own, exploring the search space in its immediate vicinity without reference to what other individuals may have discovered. With crossover, however, there is a transfer of information between successful candidates - individuals can benefit from what others have learned, and schemata can be mixed and combined, with the potential to produce an offspring that has the strengths of both its parents and the weaknesses of neither. This point is illustrated in Koza et al. 1999, p. 486, where the authors discuss a problem of synthesizing a lowpass filter with genetic programming. In one generation, two parent circuits were selected to undergo crossover; one parent had good topology (components such as inductors and capacitors in the right places) but bad sizing (values of inductance and capacitance for its components that were far too low). The other parent had bad topology but good sizing. The result of mating the two through crossover was an offspring with the good topology of one parent and the good sizing of the other, resulting in a substantial improvement in fitness over both its parents.
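As a concrete illustration of the crossover operators described in the Methods of change section, the following Python sketch implements single-point and uniform crossover on bit-string parents. It is a generic illustration, not the circuit-synthesis representation used by Koza et al.; the example parents are arbitrary.

    import random

    def single_point_crossover(parent_a, parent_b):
        # Pick one exchange point; the offspring takes parent_a's code
        # from before that point and parent_b's code from after it.
        point = random.randint(1, len(parent_a) - 1)
        return parent_a[:point] + parent_b[point:]

    def uniform_crossover(parent_a, parent_b):
        # At each position, take the value from one parent or the other
        # with 50/50 probability.
        return [a if random.random() < 0.5 else b
                for a, b in zip(parent_a, parent_b)]

    parent_a = [0, 1, 1, 0, 1, 0, 1, 0]
    parent_b = [1, 1, 0, 0, 0, 1, 1, 1]
    print(single_point_crossover(parent_a, parent_b))
    print(uniform_crossover(parent_a, parent_b))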
The problem of finding the global optimum in a space with many local optima is also known as the dilemma of exploration vs. exploitation, a classic problem for all systems that can adapt and learn (Holland 1992, p. 69). Once an algorithm (or a human designer) has found a problem-solving strategy that seems to work satisfactorily, should it concentrate on making the best use of that strategy, or should it search for others? Abandoning a proven strategy to look for new ones is almost guaranteed to involve losses and degradation of performance, at least in the short term. But if one sticks with a particular strategy to the exclusion of all others, one runs the risk of not discovering better strategies that exist but have not yet been found. Again, genetic algorithms have shown themselves to be very good at striking this balance and discovering good solutions with a reasonable amount of time and computational effort.

Another area in which genetic algorithms excel is their ability to manipulate many parameters simultaneously (Forrest 1993, p. 874). Many real-world problems cannot be stated in terms of a single value to be minimized or maximized, but must be expressed in terms of multiple objectives, usually with tradeoffs involved: one can only be improved at the expense of another. GAs are very good at solving such problems: in particular, their use of parallelism enables them to produce multiple equally good solutions to the same problem, possibly with one candidate solution optimizing one parameter and another candidate optimizing a different one (Haupt and Haupt 1998, p. 17), and a human overseer can then select one of these candidates to use. If a particular solution to a multiobjective problem optimizes one parameter to a degree such that that parameter cannot be further improved without causing a corresponding decrease in the quality of some other parameter, that solution is called Pareto optimal or non-dominated (Coello 2000, p. 112).

Finally, one of the qualities of genetic algorithms which might at first appear to be a liability turns out to be one of their strengths: namely, GAs know nothing about the problems they are deployed to solve. Instead of using previously known domain-specific information to guide each step and making changes with a specific eye towards improvement, as human designers do, they are blind watchmakers (Dawkins 1996); they make random changes to their candidate solutions and then use the fitness function to determine whether those changes produce an improvement. The virtue of this technique is that it allows genetic algorithms to start out with an open mind, so to speak. Since its decisions are based on randomness, all possible search pathways are theoretically open to a GA; by contrast, any problem-solving strategy that relies on prior knowledge must inevitably begin by ruling out many pathways a priori, thereby missing any novel solutions that may exist there (Koza et al. 1999, p. 547). Lacking preconceptions based on established beliefs of how things should be done or what couldn't possibly work, GAs do not have this problem. Similarly, any technique that relies on prior knowledge will break down when such knowledge is not available, but again, GAs are not adversely affected by ignorance (Goldberg 1989, p. 23).
Through their components of parallelism, crossover and mutation, they can range widely over the fitness landscape, exploring regions which intelligently produced algorithms might have overlooked, and potentially uncovering solutions of startling and unexpected creativity that might never have occurred to human designers. One vivid illustration of this is the rediscovery, by genetic programming, of the concept of negative feedback - a principle crucial to many important electronic components today, but one that, when it was first discovered, was denied a patent for nine years because the concept was so contrary to established beliefs (Koza et al. 2003, p. 413). Evolutionary algorithms, of course, are neither aware nor concerned whether a solution runs counter to established beliefs - only whether it works.

What are the limitations of GAs?

Although genetic algorithms have proven to be an efficient and powerful problem-solving strategy, they are not a panacea. GAs do have certain limitations; however, it will be shown that all of these can be overcome and that none of them bear on the validity of biological evolution.

The first, and most important, consideration in creating a genetic algorithm is defining a representation for the problem. The language used to specify candidate solutions must be robust; that is, it must be able to tolerate random changes such that fatal errors or nonsense do not consistently result. There are two main ways of achieving this. The first, which is used by most genetic algorithms, is to define individuals as lists of numbers - binary-valued, integer-valued, or real-valued - where each number represents some aspect of a candidate solution. If the individuals are binary strings, 0 or 1 could stand for the absence or presence of a given feature. If they are lists of numbers, these numbers could represent many different things: the weights of the links in a neural network, the order of the cities visited in a given tour, the spatial placement of electronic components, the values fed into a controller, the torsion angles of peptide bonds in a protein, and so on. Mutation then entails changing these numbers, flipping bits, or adding or subtracting random values. In this case, the actual program code does not change; the code is what manages the simulation and keeps track of the individuals, evaluating their fitness and perhaps ensuring that only values realistic and possible for the given problem result. In another method, genetic programming, the actual program code does change. As discussed in the section Methods of representation, GP represents individuals as executable trees of code that can be mutated by changing or swapping subtrees. Both of these methods produce representations that are robust against mutation and can represent many different kinds of problems, and as discussed in the section Some specific examples, both have had considerable success.

This issue of representing candidate solutions in a robust way does not arise in nature, because the method of representation used by evolution, namely the genetic code, is inherently robust: with only a very few exceptions, such as a string of stop codons, there is no such thing as a sequence of DNA bases that cannot be translated into a protein. Therefore, virtually any change to an individual's genes will still produce an intelligible result, and so mutations in evolution have a higher chance of producing an improvement.
This is in contrast to human-created languages such as English, where the number of meaningful words is small compared to the total number of ways one can combine the letters of the alphabet, and therefore random changes to an English sentence are likely to produce nonsense.

The problem of how to write the fitness function must be carefully considered so that higher fitness is attainable and actually does equate to a better solution for the given problem. If the fitness function is chosen poorly or defined imprecisely, the genetic algorithm may be unable to find a solution to the problem, or may end up solving the wrong problem. (This latter situation is sometimes described as the tendency of a GA to cheat, although in reality all that is happening is that the GA is doing what it was told to do, not what its creators intended it to do.) An example of this can be found in Graham-Rowe 2002, in which researchers used an evolutionary algorithm in conjunction with a reprogrammable hardware array, setting up the fitness function to reward the evolving circuit for outputting an oscillating signal. At the end of the experiment, an oscillating signal was indeed being produced - but instead of the circuit itself acting as an oscillator, as the researchers had intended, they discovered that it had become a radio receiver that was picking up and relaying an oscillating signal from a nearby piece of electronic equipment. This is not a problem in nature, however. In the laboratory of biological evolution there is only one fitness function, which is the same for all living things - the drive to survive and reproduce, no matter what adaptations make this possible. Those organisms which reproduce more abundantly compared to their competitors are more fit; those which fail to reproduce are unfit.

In addition to making a good choice of fitness function, the other parameters of a GA - the size of the population, the rate of mutation and crossover, the type and strength of selection - must also be chosen with care. If the population size is too small, the genetic algorithm may not explore enough of the solution space to consistently find good solutions. If the rate of genetic change is too high or the selection scheme is chosen poorly, beneficial schemata may be disrupted and the population may enter error catastrophe, changing too fast for selection to ever bring about convergence. Living things do face similar difficulties, and evolution has dealt with them. It is true that if a population size falls too low, mutation rates are too high, or the selection pressure is too strong (such a situation might be caused by drastic environmental change), then the species may go extinct. The solution has been the evolution of evolvability - adaptations that alter a species' ability to adapt. For example, most living things have evolved elaborate molecular machinery that checks for and corrects errors during the process of DNA replication, keeping their mutation rate down to acceptably low levels; conversely, in times of severe environmental stress, some bacterial species enter a state of hypermutation where the rate of DNA replication errors rises sharply, increasing the chance that a compensating mutation will be discovered. Of course, not all catastrophes can be evaded, but the enormous diversity and highly complex adaptations of living things today show that, in general, evolution is a successful strategy.
Likewise, the diverse applications of and impressive results produced by genetic algorithms show them to be a powerful and worthwhile field of study.

One type of problem that genetic algorithms have difficulty dealing with is problems with deceptive fitness functions (Mitchell 1996, p. 125), those where the locations of improved points give misleading information about where the global optimum is likely to be found. For example, imagine a problem where the search space consisted of all eight-character binary strings, and the fitness of an individual was directly proportional to the number of 1s in it - i.e. 00000001 would be less fit than 00000011, which would be less fit than 00000111, and so on - with two exceptions: the string 11111111 turned out to have very low fitness, and the string 00000000 turned out to have very high fitness. In such a problem, a GA (as well as most other algorithms) would be no more likely to find the global optimum than random search. The resolution to this problem is the same for both genetic algorithms and biological evolution: evolution is not a process that has to find the single global optimum every time. It can do almost as well by reaching the top of a high local optimum, and for most situations this will suffice, even if the global optimum cannot easily be reached from that point. Evolution is very much a satisficer - an algorithm that delivers a good-enough solution, though not necessarily the best possible solution, given a reasonable amount of time and effort invested in the search. The Evidence for Jury-Rigged Design in Nature FAQ gives examples of this very outcome appearing in nature. (It is also worth noting that few, if any, real-world problems are as fully deceptive as the somewhat contrived example given above. Usually, the location of local improvements gives at least some information about the location of the global optimum.)

One well-known problem that can occur with a GA is known as premature convergence. If an individual that is more fit than most of its competitors emerges early on in the course of the run, it may reproduce so abundantly that it drives down the population's diversity too soon, leading the algorithm to converge on the local optimum that that individual represents rather than searching the fitness landscape thoroughly enough to find the global optimum (Forrest 1993, p. 876; Mitchell 1996, p. 167). This is an especially common problem in small populations, where even chance variations in reproduction rate may cause one genotype to become dominant over others. The most common methods implemented by GA researchers to deal with this problem all involve controlling the strength of selection, so as not to give excessively fit individuals too great an advantage. Rank, scaling and tournament selection, discussed earlier, are three major means of accomplishing this; some methods of scaling selection include sigma scaling, in which reproduction is based on a statistical comparison to the population's average fitness, and Boltzmann selection, in which the strength of selection increases over the course of a run in a manner similar to the temperature variable in simulated annealing (Mitchell 1996, p. 168). Premature convergence does occur in nature (where it is called genetic drift by biologists). This should not be surprising: as discussed above, evolution as a problem-solving strategy is under no obligation to find the single best solution, merely one that is good enough.
However, premature convergence in nature is less common, since most beneficial mutations in living things produce only small, incremental fitness improvements; mutations that produce such a large fitness gain as to give their possessors a dramatic reproductive advantage are rare.

Finally, several researchers (Holland 1992, p. 72; Forrest 1993, p. 875; Haupt and Haupt 1998, p. 18) advise against using genetic algorithms on analytically solvable problems. It is not that genetic algorithms cannot find good solutions to such problems; it is merely that traditional analytic methods take much less time and computational effort than GAs and, unlike GAs, are usually mathematically guaranteed to deliver the one exact solution. Of course, since there is no such thing as a mathematically perfect solution to any problem of biological adaptation, this issue does not arise in nature.

Some specific examples of GAs

As the power of evolution gains increasingly widespread recognition, genetic algorithms have been used to tackle a broad variety of problems in an extremely diverse array of fields, clearly showing their power and their potential. This section will discuss some of the more noteworthy uses to which they have been put.

Acoustics

Porto, Fogel and Fogel 1995 used evolutionary programming to train neural networks to distinguish between sonar reflections from different types of objects: man-made metal spheres, sea mounts, fish and plant life, and random background noise. After 500 generations, the best evolved neural network had a probability of correct classification ranging between 94% and 98% and a probability of misclassification between 7.4% and 1.5%, which are reasonable probabilities of detection and false alarm (p. 21). The evolved network matched the performance of another network developed by simulated annealing and consistently outperformed networks trained by back propagation, which repeatedly stalled at suboptimal weight sets that did not yield satisfactory results (p. 21). By contrast, both stochastic methods showed themselves able to overcome these local optima and produce smaller, effective and more robust networks; but the authors suggest that the evolutionary algorithm, unlike simulated annealing, operates on a population and so takes advantage of global information about the search space, potentially leading to better performance in the long run.

Tang et al. 1996 survey the uses of genetic algorithms within the field of acoustics and signal processing. One area of particular interest involves the use of GAs to design Active Noise Control (ANC) systems, which cancel out undesired sound by producing sound waves that destructively interfere with the unwanted noise. This is a multiple-objective problem requiring the precise placement and control of multiple loudspeakers; GAs have been used both to design the controllers and to find the optimal placement of the loudspeakers for such systems, resulting in the effective attenuation of noise (p. 33) in experimental tests.

Aerospace engineering

Obayashi et al. 2000 used a multiple-objective genetic algorithm to design the wing shape for a supersonic aircraft. Three major considerations govern the wing's configuration - minimizing aerodynamic drag at supersonic cruising speeds, minimizing drag at subsonic speeds, and minimizing aerodynamic load (the bending force on the wing). These objectives are mutually exclusive, and optimizing them all simultaneously requires tradeoffs to be made.
The chromosome in this problem is a string of 66 real-valued numbers, each of which corresponds to a specific aspect of the wing: its shape, its thickness, its twist, and so on. Evolution with elitist rank selection was simulated for 70 generations, with a population size of 64 individuals. At the termination of this process, there were several Pareto-optimal individuals, each one representing a single non-dominated solution to the problem. The paper notes that these best-of-run individuals have physically reasonable characteristics, indicating the validity of the optimization technique (p. 186). To further evaluate the quality of the solutions, six of the best were compared to a supersonic wing design produced by the SST Design Team of Japan's National Aerospace Laboratory. All six were competitive, having drag and load values approximately equal to or less than those of the human-designed wing; one of the evolved solutions in particular outperformed the NAL's design in all three objectives. The authors note that the GA's solutions are similar to a design called the arrow wing, which was first suggested in the late 1950s but ultimately abandoned in favor of the more conventional delta-wing design. In a follow-up paper (Sasaki et al. 2001), the authors repeat their experiment while adding a fourth objective, namely minimizing the twisting moment of the wing (a known potential problem for arrow-wing SST designs). Additional control points for thickness are also added to the array of design variables. After 75 generations of evolution, two of the best Pareto-optimal solutions were again compared to the Japanese National Aerospace Laboratory's wing design for the NEXST-1 experimental supersonic airplane. It was found that both of these designs (as well as one optimal design from the previous simulation, discussed above) were physically reasonable and superior to the NAL's design in all four objectives.

Williams, Crossley and Lang 2001 applied genetic algorithms to the task of spacing satellite orbits to minimize coverage blackouts. As telecommunications technology continues to improve, humans are increasingly dependent on Earth-orbiting satellites to perform many vital functions, and one of the problems engineers face is designing their orbital trajectories. Satellites in high Earth orbit, around 22,000 miles up, can see large sections of the planet at once and be in constant contact with ground stations, but these are far more expensive to launch and more vulnerable to cosmic radiation. It is more economical to put satellites in low orbits, as low as a few hundred miles in some cases, but because of the curvature of the Earth it is inevitable that these satellites will at times lose line-of-sight access to surface receivers and thus be useless. Even constellations of several satellites experience unavoidable blackouts and losses of coverage for this reason. The challenge is to arrange the satellites' orbits to minimize this downtime. This is a multi-objective problem, involving the minimization of both the average blackout time for all locations and the maximum blackout time for any one location; in practice, these goals turn out to be mutually exclusive. When the GA was applied to this problem, the evolved results for three-, four- and five-satellite constellations were unusual, highly asymmetric orbit configurations, with the satellites spaced by alternating large and small gaps rather than the equal-sized gaps that conventional techniques would produce.
However, this solution significantly reduced both average and maximum revisit times, in some cases by up to 90 minutes. In a news article about the results, Dr. William Crossley noted that engineers with years of aerospace experience were surprised by the higher performance offered by the unconventional design.

Keane and Brown 1996 used a GA to evolve a new design for a load-bearing truss or boom that could be assembled in orbit and used for satellites, space stations and other aerospace construction projects. The result, a twisted, organic-looking structure that has been compared to a human leg bone, uses no more material than the standard truss design but is lightweight, strong and far superior at damping out damaging vibrations, as was confirmed by real-world tests of the final product. And yet no intelligence made the designs; they just evolved (Petit 1998). The authors of the paper further note that their GA ran for only 10 generations due to the computationally intensive nature of the simulation, and the population had not become stagnant; continuing the run for more generations would undoubtedly have produced further improvements in performance.

Figure 4: A genetically optimized three-dimensional truss with improved frequency response. (Adapted from [1].)

Finally, as reported in Gibbs 1996, Lockheed Martin has used a genetic algorithm to evolve a series of maneuvers to shift a spacecraft from one orientation to another within 2% of the theoretical minimum time for such maneuvers. The evolved solution was 10% faster than a solution hand-crafted by an expert for the same problem.

Astronomy and astrophysics

Charbonneau 1995 suggests the usefulness of GAs for problems in astrophysics by applying them to three example problems: fitting the rotation curve of a galaxy based on observed rotational velocities of its components, determining the pulsation period of a variable star based on time-series data, and solving for the critical parameters in a magnetohydrodynamic model of the solar wind. All three of these are hard multi-dimensional nonlinear problems. Charbonneau's genetic algorithm, PIKAIA, uses generational, fitness-proportionate ranking selection coupled with elitism, ensuring that the single best individual is copied over once into the next generation without modification. PIKAIA has a crossover rate of 0.65 and a variable mutation rate that is set to 0.003 initially and gradually increases later on, as the population approaches convergence, to maintain variability in the gene pool. In the galactic rotation-curve problem, the GA produced two curves, both of which were very good fits to the data (a common result in this type of problem, in which there is little contrast between neighboring hilltops); further observations can then distinguish which one is to be preferred. In the time-series problem, the GA was impressively successful in autonomously generating a high-quality fit for the data, but harder problems were not fitted as well (although, Charbonneau points out, these problems are equally difficult to solve with conventional techniques). The paper suggests that a hybrid GA employing both artificial evolution and standard analytic techniques might perform better. Finally, in solving for the six critical parameters of the solar wind, the GA successfully determined the value of three of them to an accuracy of within 0.1% and the remaining three to accuracies of within 1% to 10%.
(Though lower experimental error for these last three parameters would always be preferable, Charbonneau notes that there are no other robust, efficient methods for experimentally solving a six-dimensional nonlinear problem of this type; a conjugate gradient method works as long as a very good starting guess can be provided (p.323). By contrast, GAs do not require such finely tuned domain-specific knowledge.) Based on the results obtained so far, Charbonneau suggests that GAs can and should find use in other difficult problems in astrophysics, in particular inverse problems such as Doppler imaging and helioseismic inversions. In closing, Charbonneau argues that GAs are "a strong and promising contender" (p.324) in this field, one that can be expected to complement rather than replace traditional optimization techniques, and concludes that "the bottom line, if there is to be one, is that genetic algorithms work ... and often frightfully well" (p.325).
Chemistry
High-powered, ultrashort pulses of laser energy can split apart complex molecules into simpler molecules, a process with important applications to organic chemistry and microelectronics. The specific end products of such a reaction can be controlled by modulating the phase of the laser pulse. However, for large molecules, solving for the desired pulse shape analytically is too difficult: the calculations are too complex and the relevant characteristics (the potential energy surfaces of the molecules) are not known precisely enough. Assion et al. 1998 solved this problem by using an evolutionary algorithm to design the pulse shape. Instead of inputting complex, problem-specific knowledge about the quantum characteristics of the input molecules to design the pulse to specifications, the EA fires a pulse, measures the proportions of the resulting product molecules, randomly mutates the beam characteristics with the hope of getting these proportions closer to the desired output, and the process repeats. (Rather than fine-tune any characteristics of the laser beam directly, the authors' GA represents individuals as a set of 128 numbers, each of which is a voltage value that controls the refractive index of one of the pixels in the laser light modulator. Again, no problem-specific knowledge about the properties of either the laser or the reaction products is needed.) The authors state that their algorithm, when applied to two sample substances, "automatically finds the best configuration ... no matter how complicated the molecular response may be" (p.920), demonstrating "automated coherent control on products that are chemically different from each other and from the parent molecule" (p.921). In the early to mid-1990s, the widespread adoption of a novel drug design technique called combinatorial chemistry revolutionized the pharmaceutical industry. In this method, rather than the painstaking, precise synthesis of a single compound at a time, biochemists deliberately mix a wide variety of reactants to produce an even wider variety of products - hundreds, thousands or millions of different compounds per batch - which can then be rapidly screened for biochemical activity. In designing libraries of reactants for this technique, there are two main approaches: reactant-based design, which chooses optimized groups of reactants without considering what products will result, and product-based design, which selects reactants most likely to produce products with the desired properties.
Product-based design is more difficult and complex, but has been shown to result in better and more diverse combinatorial libraries and a greater likelihood of getting a usable result. In a paper funded by GlaxoSmithKline's research and development department, Gillet 2002 discusses the use of a multiobjective genetic algorithm for the product-based design of combinatorial libraries. In choosing the compounds that go into a particular library, qualities such as molecular diversity and weight, cost of supplies, toxicity, absorption, distribution, and metabolism must all be considered. If the aim is to find molecules similar to an existing molecule of known function (a common method of new drug design), structural similarity can also be taken into account. This paper presents a multiobjective approach in which a set of Pareto-optimal results that maximize or minimize each of these objectives can be developed. The author concludes that the GA was able to simultaneously satisfy the criteria of molecular diversity and maximum synthetic efficiency, and was able to find molecules that were drug-like as well as very similar to given target molecules, after exploring a very small fraction of the total search space (p.378). In a related paper, Glen and Payne 1995 discuss the use of genetic algorithms to automatically design new molecules from scratch to fit a given set of specifications. Given an initial population, either generated randomly or using the simple molecule ethane as a seed, the GA randomly adds, removes and alters atoms and molecular fragments with the aim of generating molecules that fit the given constraints. The GA can simultaneously optimize a large number of objectives, including molecular weight, molecular volume, number of bonds, number of chiral centers, number of atoms, number of rotatable bonds, polarizability, dipole moment, and more, in order to produce candidate molecules with the desired properties. Based on experimental tests, including one difficult optimization problem that involved generating molecules with properties similar to ribose (a sugar compound frequently mimicked in antiviral drugs), the authors conclude that the GA is an "excellent idea generator" (p.199) that offers "fast and powerful optimisation properties" and can "generate a diverse set of possible structures" (p.182). They go on to state, "Of particular note is the powerful optimising ability of the genetic algorithm, even with relatively small population sizes" (p.200). In a sign that these results are not merely theoretical, Lemley 2001 reports that the Unilever corporation has used genetic algorithms to design new antimicrobial compounds for use in cleansers, which it has patented.
Electrical engineering
A field-programmable gate array, or FPGA for short, is a special type of circuit board with an array of logic cells, each of which can act as any type of logic gate, connected by flexible interlinks. Both the cells' functions and the connections between them are controlled by software, so merely by loading a special program into the board, it can be altered on the fly to perform the functions of any one of a vast variety of hardware devices. Dr. Adrian Thompson has exploited this device, in conjunction with the principles of evolution, to produce a prototype voice-recognition circuit that can distinguish between and respond to spoken commands using only 37 logic gates - a task that would have been considered impossible for any human engineer.
He generated random bit strings of 0s and 1s and used them as configurations for the FPGA, selecting the fittest individuals from each generation, reproducing and randomly mutating them, swapping sections of their code and passing them on to another round of selection. His goal was to evolve a device that could at first discriminate between tones of different frequencies (1 and 10 kilohertz), then distinguish between the spoken words "go" and "stop". This aim was achieved within 3,000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997). Altshuler and Linden 1997 used a genetic algorithm to evolve wire antennas with pre-specified properties. The authors note that the design of such antennas is an imprecise process, starting with the desired properties and then determining the antenna's shape through "guesses, intuition, experience, approximate equations or empirical studies" (p.50). This technique is time-consuming, often does not produce optimal results, and tends to work well only for relatively simple, symmetric designs. By contrast, in the genetic algorithm approach, the engineer specifies the antenna's electromagnetic properties, and the GA automatically synthesizes a matching configuration. Altshuler and Linden used their GA to design a circularly polarized seven-segment antenna with hemispherical coverage. Each individual in the GA consisted of a binary chromosome specifying the three-dimensional coordinates of each end of each wire. Fitness was evaluated by simulating each candidate according to an electromagnetic wiring code, and the best-of-run individual was then built and tested. The authors describe the shape of this antenna, which does not resemble traditional antennas and has no obvious symmetry, as "unusually weird and counter-intuitive" (p.52), yet it had a nearly uniform radiation pattern with high bandwidth both in simulation and in experimental testing, excellently matching the prior specification. The authors conclude that "a genetic algorithm-based method for antenna design shows remarkable promise ... this new design procedure is capable of finding genetic antennas able to effectively solve difficult antenna problems, and it will be particularly useful in situations where existing designs are not adequate" (p.52).
Financial markets
Mahfoud and Mani 1996 used a genetic algorithm to predict the future performance of 1600 publicly traded stocks. Specifically, the GA was tasked with forecasting the relative return of each stock, defined as that stock's return minus the average return of all 1600 stocks over the time period in question, 12 weeks (one calendar quarter) into the future.
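The forecasting target just defined, relative return, is simple to state in code, and the sketch below also shows the general shape of the if-then classification rules the GA evolves, described in the next paragraph. The attribute names, thresholds and sample figures are invented for illustration and are not taken from the paper.

# Sketch of the forecasting target and rule form discussed here: relative
# return is a stock's return minus the average return of the whole universe,
# and candidate solutions are if-then rules over stock attributes.  The
# attribute names, thresholds and sample numbers are invented illustrations.

def relative_return(stock_return, all_returns):
    """Return of one stock minus the average return of all stocks."""
    return stock_return - sum(all_returns) / len(all_returns)

def example_rule(stock):
    """A hypothetical evolved rule: returns ('buy'|'sell'|'no prediction', forecast)."""
    if stock["pe_ratio"] < 12 and stock["growth_rate"] > 0.10:
        return "buy", +0.03            # forecast: +3% relative return
    if stock["pe_ratio"] > 40:
        return "sell", -0.02
    return "no prediction", 0.0

if __name__ == "__main__":
    universe = [0.05, -0.02, 0.01, 0.04]              # invented quarterly returns
    print(round(relative_return(0.05, universe), 3))  # 0.03
    print(example_rule({"pe_ratio": 10, "growth_rate": 0.15}))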
As input, the GA was given historical data about each stock in the form of a list of 15 attributes, such as price-to-earnings ratio and growth rate, measured at various past points in time; the GA was asked to evolve a set of if-then rules to classify each stock and to provide, as output, both a recommendation on what to do with regard to that stock (buy, sell, or no prediction) and a numerical forecast of the relative return. The GA's results were compared to those of an established neural net-based system which the authors had been using to forecast stock prices and manage portfolios for three years previously. Of course, the stock market is an extremely noisy and nonlinear system, and no predictive mechanism can be correct 100% of the time; the challenge is to find a predictor that is accurate more often than not. In the experiment, the GA and the neural net each made forecasts at the end of the week for each one of the 1600 stocks, for twelve consecutive weeks. Twelve weeks after each prediction, the actual performance was compared with the predicted relative return. Overall, the GA significantly outperformed the neural network: in one trial run, the GA correctly predicted the direction of one stock 47.6% of the time, made no prediction 45.8% of the time, and made an incorrect prediction only 6.6% of the time, for an overall predictive accuracy of 87.8%. Although the neural network made definite predictions more often, it was also wrong in its predictions more often (in fact, the authors speculate that the GA's greater ability to make no prediction when the data were uncertain was a factor in its success; the neural net always produces a prediction unless explicitly restricted by the programmer). In the 1600-stock experiment, the GA produced a relative return of 5.47%, versus 4.40% for the neural net - a statistically significant difference. In fact, the GA also significantly outperformed three major stock market indices - the S&P 500, the S&P 400, and the Russell 2000 - over this period; chance was excluded as the cause of this result at the 95% confidence level. The authors attribute this compelling success to the ability of the genetic algorithm to learn nonlinear relationships not readily apparent to human observers, as well as to the fact that it lacks a human expert's "a priori bias against counterintuitive or contrarian rules" (p.562). Similar success was achieved by Andreou, Georgopoulos and Likothanassis 2002, who used hybrid genetic algorithms to evolve neural networks that predicted the exchange rates of foreign currencies up to one month ahead. As opposed to the last example, where GAs and neural nets were in competition, here the two worked in concert, with the GA evolving the architecture (number of input units, number of hidden units, and the arrangement of the links between them) of the network, which was then trained by a filter algorithm. As historical information, the algorithm was given 1,300 previous raw daily values of five currencies - the American dollar, the German deutsche mark, the French franc, the British pound, and the Greek drachma - and asked to predict their future values 1, 2, 5, and 20 days ahead. The hybrid GA's performance, in general, showed "a remarkable level of accuracy" (p.200) in all cases tested, outperforming several other methods including neural networks alone.
Correlations for the one-day case ranged from 92% to 99%, and though accuracy decreased over increasingly greater time lags, the GA continued to be "quite successful" (p.206) and clearly outperformed the other methods. The authors conclude that "remarkable prediction success has been achieved in both a one-step ahead and a multistep predicting horizon" (p.208) - in fact, they state that their results are better by far than any related predictive strategies attempted on this data series or other currencies. The uses of GAs on the financial markets have begun to spread into real-world brokerage firms. Naik 1996 reports that LBS Capital Management, an American firm headquartered in Florida, uses genetic algorithms to pick stocks for a pension fund it manages. Coale 1997 and Begley and Beals 1995 report that First Quadrant, an investment firm in California that manages over $2.2 billion, uses GAs to make investment decisions for all of its financial services. Its evolved model earns, on average, $255 for every $100 invested over six years, as opposed to $205 for other types of modeling systems.
Game playing
One of the most novel and compelling demonstrations of the power of genetic algorithms was presented by Chellapilla and Fogel 2001, who used a GA to evolve neural networks that could play the game of checkers. The authors state that one of the major difficulties in these sorts of strategy-related problems is the credit assignment problem - in other words, how does one write a fitness function? It has been widely believed that the mere criterion of win, lose or draw does not provide sufficient information for an evolutionary algorithm to figure out what constitutes good play. In this paper, Chellapilla and Fogel overturn that assumption. Given only the spatial positions of pieces on the checkerboard and the total number of pieces possessed by each side, they were able to evolve a checkers program that plays at a level competitive with human experts, without any intelligent input as to what constitutes good play - indeed, the individuals in the evolutionary algorithm were not even told what the criteria for a win were, nor were they told the result of any one game. In Chellapilla and Fogel's representation, the game state was represented by a numeric list of 32 elements, with each position in the list corresponding to an available position on the board. The value at each position was 0 for an unoccupied square, -1 if that square was occupied by an enemy checker, +1 if that square was occupied by one of the program's checkers, and -K or +K for a square occupied by an enemy or friendly king. (The value of K was not pre-specified, but again was determined by evolution over the course of the algorithm.) Accompanying this was a neural network with multiple processing layers and one input layer with a node for each of the possible 4x4, 5x5, 6x6, 7x7 and 8x8 squares on the board. The output of the neural net for any given arrangement of pieces was a value from -1 to 1 indicating how good it felt that position was for it. For each move, the neural network was presented with a game tree listing all possible moves up to four turns into the future, and a move decision was made based on which branch of the tree produced the best results for it. The evolutionary algorithm began with a population of 15 neural networks with randomly generated weights and biases assigned to each node and link; each individual then reproduced once, generating an offspring with variations in the values of the network.
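The board representation just described is easy to reproduce: a position becomes a vector of 32 numbers, one per playable square, with the king value K left to evolution. A minimal Python sketch follows; the sample position, the square labels and the value chosen for K are arbitrary illustrations, and the evolved neural network that would consume this vector is omitted.

# Sketch of the 32-element board encoding described above: 0 for an empty
# playable square, +1/-1 for own/enemy checkers, +K/-K for kings.  The sample
# position and the value of K below are arbitrary illustrations; the evolved
# neural network that evaluates this vector is not shown.

EMPTY, OWN_MAN, ENEMY_MAN, OWN_KING, ENEMY_KING = ".", "o", "x", "O", "X"

def encode(board, k):
    """board: list of 32 square labels, in the order of the playable squares."""
    values = {EMPTY: 0.0, OWN_MAN: 1.0, ENEMY_MAN: -1.0,
              OWN_KING: k, ENEMY_KING: -k}
    return [values[square] for square in board]

if __name__ == "__main__":
    K = 1.4                                   # in the paper, K itself evolved
    position = list("xxxxxxxxxxxx........oooooooooooo")  # opening position
    vector = encode(position, K)
    print(vector[:8], "...", vector[-8:])
    print("piece differential:", sum(1 for s in position if s in "oO")
          - sum(1 for s in position if s in "xX"))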
These 30 individuals then competed for survival by playing against each other, with each individual competing against 5 randomly chosen opponents per turn. One point was awarded for each win and two points were deducted for each loss. The 15 best performers, based on total score, were selected to produce offspring for the next generation, and the process repeated. Evolution was continued for 840 generations (approximately six months of computer time). The single best individual that emerged from this selection was entered as a competitor on the gaming website zone. Over a period of two months, it played against 165 human opponents comprising a range of high skill levels, from class C to master, according to the ranking system of the United States Chess Federation. Of these games, the neural net won 94, lost 39 and drew 32; based on the rankings of the opponents in these games, the evolved neural net was equivalent to a player with a mean rating of 2045.85, placing it at the expert level - a higher ranking than 99.61% of the more than 80,000 players registered at the website. One of the neural net's most significant victories came when it defeated a player ranked 98th out of all registered players, whose rating was just 27 points below master level. Tests conducted with a simple piece-differential program (which bases moves solely on the difference between the number of checkers remaining to each side) with an eight-move look-ahead showed the neural net to be significantly superior, with a rating over 400 points higher. "A program that relies only on the piece count and an eight-ply search will defeat a lot of people, but it is not an expert. The best evolved neural network is" (p.425). Even when it was searching positions two further moves ahead than the neural net, the piece-differential program lost decisively in eight out of ten games. This conclusively demonstrates that the evolved neural net is not merely counting pieces, but is somehow processing spatial characteristics of the board to decide its moves. The authors point out that opponents on zone often commented that the neural net's moves were "strange", but its overall level of play was described as "very tough" or with similar complimentary terms. To further test the evolved neural network (which the authors named Anaconda, since it often won by restricting its opponents' mobility), it was played against a commercial checkers program, Hoyle's Classic Games, distributed by Sierra Online (Chellapilla and Fogel 2000). This program comes with a variety of built-in characters, each of whom plays at a different skill level. Anaconda was tested against three characters (Beatrice, Natasha and Leopold) designated as expert-level players, playing one game as red and one game as white against each of them with a six-ply look-ahead. Though the authors doubted that this depth of look-ahead would give Anaconda the ability to play at the expert skill level it had previously shown, it won all six games played. Based on this outcome, the authors expressed skepticism over whether the Hoyle software played at the skill level advertised, though it should be noted that they reached this conclusion based solely on the ease with which Anaconda defeated it. The ultimate test of Anaconda came in Chellapilla and Fogel 2002, where the evolved neural net was matched against the best checkers player in the world: Chinook, a program designed principally by Dr.
Jonathan Schaeffer of the University of Alberta. Rated at 2814 in 1996 (with its closest human competitors rated in the 2600s), Chinook incorporates a book of opening moves provided by human grandmasters, a sophisticated set of middle-game algorithms, and a complete database of all possible moves with ten pieces on the board or fewer, so it never makes a mistake in the endgame. An enormous amount of human intelligence and expertise went into the design of this program. Chellapilla and Fogel pitted Anaconda against Chinook in a 10-game tournament, with Chinook playing at a 5-ply skill level, making it roughly comparable to a master-level player. Chinook won this contest, four wins to two with four draws. (Interestingly, the authors note, in two of the games that ended as draws, Anaconda held the lead with four kings to Chinook's three. Furthermore, one of Chinook's wins came from a 10-ply series of moves drawn from its endgame database, which Anaconda, with an 8-ply look-ahead, could not have anticipated. If Anaconda had had access to an endgame database of the same quality as Chinook's, the outcome of the tournament might well have been a victory for Anaconda, four wins to three.) These results provide good support for the expert-level rating that Anaconda earned on zone (p.76), with an overall rating of 2030-2055, comparable to the 2045 rating it earned by playing against humans. While Anaconda is not an invulnerable player, it is able to play competitively at the expert level and hold its own against a variety of extremely skilled human checkers players. When one considers the very simple fitness criterion under which these results were obtained, the emergence of Anaconda provides dramatic corroboration of the power of evolution.
Geophysics
Sambridge and Gallagher 1993 used a genetic algorithm to locate earthquake hypocenters based on seismological data. (The hypocenter is the point beneath the Earth's surface at which an earthquake begins. The epicenter is the point on the surface directly above the hypocenter.) This is an exceedingly complex task, since the properties of seismic waves depend on the properties of the rock layers through which they travel. The traditional method for locating the hypocenter relies upon what is known as a seismic inversion algorithm, which starts with a best guess of the location, calculates the derivatives of wave travel time with respect to source position, and performs a matrix operation to provide an updated location. This process is repeated until an acceptable solution is reached. (This Post of the Month from November 2003 provides more information.) However, this method requires derivative information and is prone to becoming trapped on local optima. A location algorithm that does not depend on derivative information or velocity models can avoid these shortfalls by calculating only the forward problem - the difference between observed and predicted wave arrival times for different hypocenter locations. However, an exhaustive search based on this method would be far too computationally expensive. This, of course, is precisely the type of optimization problem at which genetic algorithms excel. Like all GAs, the one proposed by the cited paper is parallel in nature - rather than progressively perturbing a single hypocenter closer and closer to the solution, it begins with a cloud of potential hypocenters which shrinks over time to converge on a single solution.
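The forward problem just mentioned makes a natural GA fitness function: for a candidate hypocenter and origin time, predict arrival times at each recording station and score the misfit against the observed times. The sketch below assumes a uniform seismic velocity and invented station data purely for illustration; it is not the velocity model or data of the cited paper, and it samples a single random "cloud" rather than evolving it generation by generation as the GA would.

import math, random

# Sketch of a forward-problem misfit for hypocenter location: no derivatives,
# just a comparison of observed and predicted arrival times.  The uniform
# 6 km/s velocity, the station list and the "observed" times are illustrative
# assumptions, not data from the paper.

VELOCITY = 6.0   # km/s, assumed constant everywhere
STATIONS = [(0.0, 0.0), (40.0, 5.0), (10.0, 55.0), (-30.0, 20.0)]  # x, y in km

def predicted_arrival(hypocenter, station):
    x, y, z, t0 = hypocenter                 # depth z in km, origin time t0 in s
    sx, sy = station
    distance = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2)
    return t0 + distance / VELOCITY

def misfit(hypocenter, observed):
    """Sum of squared residuals; a GA would minimize this (maximize -misfit)."""
    return sum((predicted_arrival(hypocenter, s) - t) ** 2
               for s, t in zip(STATIONS, observed))

if __name__ == "__main__":
    true_source = (12.0, 18.0, 9.0, 0.0)
    observed = [predicted_arrival(true_source, s) for s in STATIONS]
    random.seed(3)
    # a "cloud" of random candidate hypocenters, as a stand-in for a population
    cloud = [(random.uniform(-50, 50), random.uniform(-50, 60),
              random.uniform(0, 30), random.uniform(-2, 2)) for _ in range(500)]
    best = min(cloud, key=lambda h: misfit(h, observed))
    print("best of random cloud:", [round(v, 1) for v in best],
          "misfit", round(misfit(best, observed), 3))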
The authors state that their approach "can rapidly locate near optimal solutions without an exhaustive search of the parameter space" (p.1467), "displays highly organized behavior resulting in efficient search" and is "a compromise between the efficiency of derivative based methods and the robustness of a fully nonlinear exhaustive search" (p.1469). The authors conclude that their genetic algorithm is "efficient for truly global optimization" (p.1488) and "a powerful new tool for performing robust hypocenter location" (p.1489).
Materials engineering
Giro, Cyrillo and Galvão 2002 used genetic algorithms to design electrically conductive carbon-based polymers known as polyanilines. These polymers, a recently invented class of synthetic materials, have "large technological potential applications" and may open up "windows onto new fundamental physical phenomena" (p.170). However, due to their high reactivity, carbon atoms can form a virtually infinite number of structures, making a systematic search for new molecules with interesting properties all but impossible. In this paper, the authors apply a GA-based approach to the task of designing new molecules with pre-specified properties, starting with a randomly generated population of initial candidates. They conclude that their methodology can be "a very effective tool" (p.174) to guide experimentalists in the search for new compounds and is general enough to be extended to the design of novel materials belonging to virtually any class of molecules. Weismann, Hammel and Bäck 1998 applied evolutionary algorithms to a "nontrivial" (p.162) industrial problem: the design of multilayer optical coatings used for filters that reflect, transmit or absorb light of specified frequencies. These coatings are used in the manufacture of sunglasses, for example, or compact discs. Their manufacture is a precise task: the layers must be laid down in a particular sequence and in particular thicknesses to produce the desired result, and uncontrollable variations in the manufacturing environment, such as temperature, pollution and humidity, may affect the performance of the finished product. Many local optima are not robust against such variations, meaning that maximum product quality must be paid for with higher rates of undesirable deviation. The particular problem considered in this paper also had multiple criteria: in addition to the reflectance, the spectral composition (color) of the reflected light was also considered. The EA operated by varying the number of coating layers and the thickness of each, and produced designs that were "substantially more robust to parameter variation" (p.166) and had higher average performance than traditional methods. The authors conclude that evolutionary algorithms "can compete with or even outperform traditional methods" (p.167) of multilayer optical coating design, without having to incorporate domain-specific knowledge into the search function and without having to seed the population with good initial designs. One more use of GAs in the field of materials engineering merits mention: Robin et al. 2003 used GAs to design exposure patterns for an electron lithography beam, used to etch submicrometer-scale structures onto integrated circuits. Designing these patterns is a highly difficult task; it is cumbersome and wasteful to determine them experimentally, but the high dimensionality of the search space defeats most search algorithms.
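The optical-coating study above illustrates a representation trick worth making explicit: a variable-length chromosome, here simply a list of layer thicknesses, mutated by perturbing, inserting or deleting layers. The sketch below is a toy illustration; the thickness bounds, the probabilities and the stand-in fitness (a real design would be scored by a thin-film reflectance model and penalized for sensitivity to manufacturing error) are all assumptions, not the cited EA.

import random

# Sketch of a variable-length chromosome for multilayer coating design: the
# genome is a list of layer thicknesses (in nanometres), and mutation may
# perturb, insert or delete a layer.  All ranges and probabilities are
# illustrative assumptions; the reflectance model that would supply the real
# fitness is reduced to a placeholder.

MIN_T, MAX_T = 20.0, 400.0       # assumed admissible layer thicknesses, nm

def mutate(layers, p_perturb=0.2, p_insert=0.05, p_delete=0.05):
    child = []
    for t in layers:
        if random.random() < p_delete and len(layers) > 2:
            continue                                        # drop this layer
        if random.random() < p_perturb:
            t = min(MAX_T, max(MIN_T, t + random.gauss(0, 10)))
        child.append(t)
        if random.random() < p_insert:
            child.append(random.uniform(MIN_T, MAX_T))      # add a new layer
    return child

def fitness(layers):
    """Placeholder: a real implementation would compute spectral reflectance
    with a thin-film model and penalize sensitivity to thickness errors."""
    target_layers, target_total = 12, 2000.0
    return -abs(len(layers) - target_layers) - abs(sum(layers) - target_total) / 100.0

if __name__ == "__main__":
    random.seed(7)
    design = [random.uniform(MIN_T, MAX_T) for _ in range(8)]
    for _ in range(200):                       # crude (1+1)-style evolution loop
        candidate = mutate(design)
        if fitness(candidate) >= fitness(design):
            design = candidate
    print(len(design), "layers, total", round(sum(design), 1), "nm")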
As many as 100 parameters must be optimized simultaneously to control the electron beam and prevent scattering and proximity effects that would otherwise ruin the fine structures being sculpted. The forward problem - determining the resulting structure as a function of the electron dose - is straightforward and easy to simulate, but the inverse problem of determining the electron dose that produces a given structure, which is what is being solved here, is far harder, and no deterministic solution exists. Genetic algorithms, which are known to be able to find good solutions to very complex problems of high dimensionality (p.75) without needing to be supplied with domain-specific information on the topology of the search landscape, were applied successfully to this problem. The paper's authors employed a steady-state GA with roulette-wheel selection in a computer simulation, which yielded "very good optimized" (p.77) exposure patterns. By contrast, a type of hill-climber known as a simplex-downhill algorithm was applied to the same problem without success; the SD method quickly became trapped in local optima which it could not escape, yielding solutions of poor quality. A hybrid approach of the GA and SD methods also could not improve on the results delivered by the GA alone.
Mathematics and algorithmics
Although some of the most promising applications and compelling demonstrations of GAs' power are in the field of engineering design, they are also relevant to pure mathematical problems. Haupt and Haupt 1998 (p.140) discuss the use of GAs to solve high-order nonlinear partial differential equations, typically by finding the values for which the equations equal zero, and give as an example a near-perfect GA solution for the coefficients of the fifth-order super Korteweg-de Vries equation. Sorting a list of items into order is an important task in computer science, and a sorting network is an efficient way to accomplish this. A sorting network is a fixed list of comparisons performed on a set of a given size; in each comparison, two elements are compared and exchanged if they are out of order. Koza et al. 1999, p. 952, used genetic programming to evolve minimal sorting networks for 7-item sets (16 comparisons), 8-item sets (19 comparisons), and 9-item sets (25 comparisons). Mitchell 1996, p. 21, discusses the use of genetic algorithms by W. Daniel Hillis to find a 61-comparison sorting network for a 16-item set, only one step more than the smallest known. This latter example is particularly interesting for two innovations it used: diploid chromosomes and, more notably, host-parasite coevolution. Both the sorting networks and the test cases evolved alongside each other: sorting networks were given higher fitness based on how many test cases they sorted correctly, while test cases were given higher fitness based on how many sorting networks they could trick into sorting incorrectly. The GA with coevolution performed significantly better than the same GA without it. One final, noteworthy example of GAs in the field of algorithmics can be found in Koza et al. 1999, who used genetic programming to discover a rule for the majority classification problem in one-dimensional cellular automata that is better than all known rules written by humans. A one-dimensional cellular automaton can be thought of as a finite tape with a given number of positions (cells) on it, each of which can hold either the state 0 or the state 1.
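In code, such a tape and its update rule are just a list of 0s and 1s and a lookup table indexed by each cell's neighborhood. The sketch below uses an arbitrary radius-1 rule purely to show the machinery; the evolved majority-classification rules discussed here use a radius of 3 (a 128-entry table) and are judged by whether the tape settles to all 1s exactly when 1s were initially in the majority.

import random
from itertools import product

# Sketch of a one-dimensional, two-state cellular automaton: a tape of 0s and
# 1s updated by a rule table indexed by each cell's neighborhood.  The radius-1
# neighborhood and the random rule below are illustrative assumptions, not the
# evolved rules discussed in the text.

RADIUS = 1                                   # assumed for brevity

def step(tape, rule):
    n = len(tape)
    return [rule[tuple(tape[(i + d) % n] for d in range(-RADIUS, RADIUS + 1))]
            for i in range(n)]               # rule: neighborhood tuple -> 0 or 1

def run(tape, rule, steps):
    for _ in range(steps):
        tape = step(tape, rule)
    return tape

if __name__ == "__main__":
    random.seed(5)
    # a random rule table over all 2^(2*RADIUS+1) possible neighborhoods
    rule = {nb: random.randint(0, 1) for nb in product([0, 1], repeat=2 * RADIUS + 1)}
    tape = [random.randint(0, 1) for _ in range(31)]
    final = run(tape, rule, steps=60)
    majority = int(sum(tape) * 2 > len(tape))
    verdict = int(all(final)) if all(final) or not any(final) else "undecided"
    print("initial majority:", majority, " classified as:", verdict)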
The automaton runs for a given number of time steps; at each step, every cell acquires a new value based on its previous value and the values of its nearest neighbors. (The Game of Life is a two-dimensional cellular automaton.) The majority classification problem entails finding a table of rules such that, if more than half the cells on the tape are 1 initially, all the cells go to 1; otherwise, all the cells go to 0. The challenge lies in the fact that any individual cell can only access information about its nearest neighbors; therefore, good rule sets must somehow find a way to transmit information about distant regions of the tape. It is known that a perfect solution to this problem does not exist - no rule set can accurately classify all possible initial configurations - but over the past twenty years, there has been a long succession of increasingly better solutions. In 1978, three researchers developed the so-called GKL rule, which correctly classifies 81.6% of the possible initial states. In 1993, a better rule with an accuracy of 81.8% was discovered; in 1995, another rule with an accuracy of 82.178% was found. All of these rules required significant work by intelligent, creative humans to develop. By contrast, the best rule discovered by a run of genetic programming, given in Koza et al. 1999, p.973, has an overall accuracy of 82.326% - better than any of the human-created solutions that have been developed over the last two decades. The authors note that their new rules are qualitatively different from previously published rules, employing fine-grained internal representations of state density and intricate sets of signals for communicating information over long distances.
Military and law enforcement
Kewley and Embrechts 2002 used genetic algorithms to evolve tactical plans for military battles. The authors note that planning for a tactical military battle is a complex, high-dimensional task which often bedevils experienced professionals (p.163), not only because such decisions are usually made under high-stress conditions, but also because even simple plans require a great number of conflicting variables and outcomes to be taken into account: minimizing friendly casualties, maximizing enemy casualties, controlling desired terrain, conserving resources, and so on. Human planners have difficulty dealing with the complexities of this task and often must resort to quick and dirty approaches, such as doing whatever worked last time. To overcome these difficulties, the authors of the cited paper developed a genetic algorithm to automate the creation of battle plans, in conjunction with a graphical battle simulator program. The commander enters the preferred outcome, and the GA automatically evolves a battle plan; in the simulation used, factors such as the topography of the land, vegetative cover, troop movement speed, and firing accuracy were taken into account. In this experiment, co-evolution was also used to improve the quality of the solutions: battle plans for the enemy forces evolved concurrently with friendly plans, forcing the GA to correct any weaknesses in its own plan that an enemy could exploit. To measure the quality of solutions produced by the GA, they were compared to battle plans for the same scenario produced by a group of experienced military experts "considered to be very capable of developing tactical courses of action for the size forces used in this experiment" (p.166).
These seasoned experts both developed their own plan and, when the GA's solution was complete, were given a chance to examine it and modify it as they saw fit. Finally, all the sets of plans were run multiple times on the simulator to determine their average quality. The results speak for themselves: the evolved solution outperformed both the military experts' own plan and the plan produced by their modifications to the GA's solution. "The plans produced by automated algorithms had a significantly higher mean performance than those generated by experienced military experts" (p.161). Furthermore, the authors note that the GA's plan made good tactical sense. (It involved a two-pronged attack on the enemy position by mechanized infantry platoons supported by attack helicopters and ground scouts, in conjunction with unmanned aerial vehicles conducting reconnaissance to direct artillery fire.) In addition, the evolved plan included individual friendly units performing doctrinal missions - an emergent property that appeared during the course of the run, rather than being specified by the experimenter. On increasingly networked modern battlefields, the attractive potential of an evolutionary algorithm that can automate the production of high-quality tactical plans should be obvious. An interesting use of GAs in law enforcement was reported in Naik 1996, which described the FacePrints software, a project to help witnesses identify and describe criminal suspects. The clichéd image of the police sketch artist drawing a picture of the suspect's face in response to witnesses' promptings reflects a difficult and inefficient method: most people are not good at describing individual aspects of a person's face, such as the size of the nose or the shape of the jaw, but are better at recognizing whole faces. FacePrints takes advantage of this by using a genetic algorithm that evolves pictures of faces based on databases of hundreds of individual features that can be combined in a vast number of ways. The program shows randomly generated face images to witnesses, who pick the ones that most resemble the person they saw; the selected faces are then mutated and bred together to generate new combinations of features, and the process repeats until an accurate portrait of the suspect's face emerges. In one real-life robbery case, the final portraits created by the three witnesses were strikingly similar, and the resulting picture was printed in the local paper.
Molecular biology
In living things, transmembrane proteins are proteins that protrude through a cellular membrane. Transmembrane proteins often perform important functions such as sensing the presence of certain substances outside the cell or transporting them into the cell. Understanding the behavior of a transmembrane protein requires identifying the segment of that protein that is actually embedded within the membrane, which is called the transmembrane domain. Over the last two decades, molecular biologists have published a succession of increasingly accurate algorithms for this purpose. All proteins used by living things are made up of the same 20 amino acids. Some of these amino acids are hydrophobic, meaning they are repelled by water, and some are hydrophilic, meaning they are attracted to water. Amino acid sequences that are part of a transmembrane domain are more likely to be hydrophobic. However, hydrophobicity is not a precisely defined characteristic, and there is no one agreed-upon scale for measuring it. Koza et al. 1999,
chapter 16, used genetic programming to design an algorithm to identify the transmembrane domains of a protein. Genetic programming was given a set of standard mathematical operators to work with, as well as a set of boolean amino-acid-detecting functions that return +1 if the amino acid at a given position is the amino acid they detect and otherwise return -1. (For example, the A function takes as an argument one number corresponding to a position within the protein, and returns +1 if the amino acid at that position is alanine, which is denoted by the letter A; otherwise it returns -1.) A single shared memory variable kept a running count of the overall sum, and when the algorithm completed, the protein segment was identified as a transmembrane domain if its value was positive. Given only these tools, would it entail the creation of new information for a human designer to produce an efficient solution to this problem? The solutions produced by genetic programming were evaluated for fitness by testing them on 246 protein segments whose transmembrane status was known. The best-of-run individual was then evaluated on 250 additional, out-of-sample test cases and compared to the performance of the four best known human-written algorithms for the same purpose. The result: genetic programming produced a transmembrane segment-identifying algorithm with an overall error rate of 1.6% - significantly lower than that of all four human-written algorithms, the best of which had an error rate of 2.5%. The genetically designed algorithm, which the authors dubbed the 0-2-4 rule, operates as follows: increment the running sum by 4 for each instance of glutamic acid (an electrically charged and highly hydrophilic amino acid) in the protein segment; increment the running sum by 0 for each instance of alanine, phenylalanine, isoleucine, leucine, methionine, or valine (all highly hydrophobic amino acids) in the protein segment; increment the running sum by 2 for each instance of all other amino acids. If (SUM - 3.1544)/0.9357 is less than the length of the protein segment, classify that segment as a transmembrane domain; otherwise, classify it as a nontransmembrane domain.
Pattern recognition and data mining
Competition in the telecommunications industry today is fierce, and a new term - "churn" - has been coined to describe the rapid rate at which subscribers switch from one service provider to another. Churn costs telecom carriers a large amount of money each year, and reducing churn is an important factor in increasing profitability. If carriers can contact customers who are likely to switch and offer them special incentives to stay, churn rates can be reduced, but no carrier has the resources to contact more than a small percentage of its customers. The problem is therefore how to identify customers who are more likely to churn. All carriers have extensive databases of customer information that can theoretically be used for this purpose, but what method works best for sifting through this vast amount of data to identify the subtle patterns and trends that signify a customer's likelihood of churning? Au, Chan and Yao 2003 applied genetic algorithms to this problem to generate a set of if-then rules that predict the churning probability of different groups of customers. In their GA, the first generation of rules, all of which had one condition, was generated using a probabilistic induction technique. Subsequent generations then refine these, combining simple, single-condition rules into more complex, multi-condition rules.
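To return briefly to the transmembrane result: the 0-2-4 rule quoted above translates almost line for line into code. The amino-acid groupings and the threshold formula come straight from the description; the two example segments are invented purely to exercise the function and are not sequences from the cited benchmark set.

# The evolved "0-2-4 rule" for transmembrane domains, written out directly
# from the description above.  The two example segments are invented for
# illustration only (one hydrophobic-rich, one charged/polar-rich).

HYDROPHOBIC_ZERO = set("AFILMV")   # alanine, phenylalanine, isoleucine,
                                   # leucine, methionine, valine: add 0
GLUTAMIC_ACID = "E"                # add 4; every other residue adds 2

def is_transmembrane(segment):
    total = 0
    for residue in segment.upper():
        if residue == GLUTAMIC_ACID:
            total += 4
        elif residue in HYDROPHOBIC_ZERO:
            total += 0
        else:
            total += 2
    return (total - 3.1544) / 0.9357 < len(segment)

if __name__ == "__main__":
    hydrophobic_segment = "LLVAFMILAVLFLIAVML"      # invented, mostly hydrophobic
    polar_segment = "DEKRSTNQEDKRSTNQED"            # invented, mostly polar/charged
    print(is_transmembrane(hydrophobic_segment))    # expected: True
    print(is_transmembrane(polar_segment))          # expected: False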
The fitness measure used an objective "interestingness" measure of correlation, which requires no subjective input. The evolutionary data-mining algorithm was tested on a real-world database of 100,000 subscribers provided by a Malaysian carrier, and its performance was compared against two alternative methods: a multilayer neural network and a widely used decision-tree-based algorithm, C4.5. The authors state that their EA was able to discover hidden regularities in the database and was able to make accurate churn prediction under different churn rates (p.542), outperforming C4.5 under all circumstances, outperforming the neural network under low monthly churn rates and matching the neural network under larger churn rates, and reaching conclusions more quickly in both cases. Some further advantages of the evolutionary approach are that it can operate efficiently even when some data fields are missing and that it can express its findings in easily understood rule sets, unlike the neural net. Some of the more interesting rules discovered by the EA are as follows: subscribers are more likely to churn if they are personally subscribed to the service plan and have not been admitted to any bonus scheme (a potential solution is to admit all such subscribers to bonus schemes); subscribers are more likely to churn if they live in Kuala Lumpur, are between 36 and 44 years of age, and pay their bills with cash (presumably because it is easier for subscribers who pay cash, rather than those whose accounts are automatically debited, to switch providers); and subscribers living in Penang who signed up through a certain dealer are more likely to churn (this dealer may be providing poor customer service and should be investigated). Rizki, Zmuda and Tamburino 2002 used evolutionary algorithms to evolve a complex pattern recognition system with a wide variety of potential uses. The authors note that the task of pattern recognition is increasingly being performed by machine learning algorithms, evolutionary algorithms in particular. Most such approaches begin with a pool of predefined features, from which an EA can select appropriate combinations for the task at hand; by contrast, this approach began from the ground up, first evolving individual feature detectors in the form of expression trees, then evolving cooperative combinations of those detectors to produce a complete pattern recognition system. The evolutionary process automatically selects the number of feature detectors, the complexity of the detectors, and the specific aspects of the data each detector responds to. To test their system, the authors gave it the task of classifying aircraft based on their radar reflections. The same kind of aircraft can return quite different signals depending on the angle and elevation at which it is viewed, and different kinds of aircraft can return very similar signals, so this is a non-trivial task. The evolved pattern recognition system correctly classified 97.2% of the targets, a higher net percentage than any of the three other techniques - a perceptron neural network, a nearest-neighbor classifier algorithm, and a radial basis algorithm - against which it was tested. (The radial basis network's accuracy was only 0.5% less than that of the evolved classifier, which is not a statistically significant difference, but the radial basis network required 256 feature detectors while the evolved recognition system used only 17.)
As the authors state, "The recognition systems that evolve use fewer features than systems formed using conventional techniques, yet achieve comparable or superior recognition accuracy" (p.607). Various aspects of their system have also been applied to problems including optical character recognition, industrial inspection and medical image analysis. Hughes and Leyland 2000 also applied multiple-objective GAs to the task of classifying targets based on their radar reflections. High-resolution radar cross-section data requires massive amounts of disk storage space, and it is very computationally intensive to produce an actual model of the source from the data. By contrast, the authors' GA-based approach proved very successful, producing a model as good as the traditional iterative approach while reducing the computational overhead and storage requirements to the point where it was feasible to generate good models on a desktop computer. The traditional iterative approach, by comparison, requires ten times the resolution and 560,000 times as many accesses of image data to produce models of similar quality. The authors conclude that their results "clearly demonstrate" (p.160) the ability of the GA to process both two- and three-dimensional radar data of any level of resolution with far fewer calculations than traditional methods, while retaining acceptably high accuracy.
Robotics
The international RoboCup tournament is a project to promote advances in robotics, artificial intelligence, and related fields by providing a standard problem where new technologies can be tried out - specifically, it is an annual soccer tournament between teams of autonomous robots. (The stated goal is to develop a team of humanoid robots that can win against the world-champion human soccer team by 2050; at the present time, most of the competing robot teams are wheeled.) The programs that control the robotic team members must display complex behavior, deciding when to block, when to kick, how to move, when to pass the ball to teammates, how to coordinate defense and offense, and so on. In the simulator league of the 1997 competition, David Andre and Astro Teller entered a team named Darwin United whose control programs had been developed automatically from the ground up by genetic programming, a challenge to the conventional wisdom that this problem is just too difficult for such a technique (Andre and Teller 1999, p. 346). To solve this difficult problem, Andre and Teller provided the genetic programming algorithm with a set of primitive control functions such as turning, moving, kicking, and so on. (These functions were themselves subject to change and refinement during the course of evolution.) Their fitness function, written to reward good play in general rather than scoring specifically, provided a list of increasingly important objectives: getting near the ball, kicking the ball, keeping the ball on the opponent's side of the field, moving in the correct direction, scoring goals, and winning the game. It should be noted that no code was provided to teach the team specifically how to achieve these complex objectives. The evolved programs were then evaluated using a hierarchical selection model: first, the candidate teams were tested on an empty field and rejected if they did not score a goal within 30 seconds. Next, they were evaluated against a team of stationary kicking posts that kick the ball toward the opposite side of the field.
Thirdly, the team played a game against the winning team from the RoboCup 1997 competition. Finally, teams that scored at least one goal against this team were played off against each other to determine which was best. Out of 34 teams in its division, Darwin United ultimately came in 17th, placing squarely in the middle of the field and outranking half of the human-written entries. While a tournament victory would undoubtedly have been more impressive, this result is competitive and significant in its own right, and appears even more so in the light of history. About 25 years ago, chess-playing computer programs were in their infancy; a computer had only recently entered even a regional competition for the first time, although it did not win (Sagan 1979, p. 286). But "a machine that plays chess in the middle range of human expertise is a very capable machine" (ibid.), and it might be said that the same is true of robot soccer. Just as chess-playing machines compete at world grandmaster levels today, what types of systems will genetic programming be producing 20 or 30 years from now?
Routing and scheduling
Burke and Newall 1999 used genetic algorithms to schedule exams among university students. The timetable problem in general is known to be NP-complete, meaning that no method is known to find a guaranteed-optimal solution in a reasonable amount of time. In such a problem, there are both hard constraints - two exams may not be assigned to the same room at the same time - and soft constraints - students should not be assigned to multiple exams in succession, if possible, to minimize fatigue. Hard constraints must be satisfied, while soft constraints should be satisfied as far as possible. The authors dub their hybrid approach for solving this problem a memetic algorithm: an evolutionary algorithm with rank-based, fitness-proportionate selection, combined with a local hill-climber to optimize solutions found by the EA. The EA was applied to data sets from four real universities (the smallest of which had an enrollment of 25,000 students), and its results were compared to results produced by a heuristic backtracking method, a well-established algorithm that is among the best known for this problem and that is used at several real universities. Compared to this method, the EA produced a result with a quite uniform 40% reduction in penalty. He and Mort 2000 applied genetic algorithms to the problem of finding optimal routing paths in telecommunications networks (such as phone networks and the Internet), which are used to relay data from senders to recipients. This is an NP-hard optimization problem, a type of problem for which GAs are extremely well suited and have found an enormous range of successful applications in such areas (p.42). It is also a multiobjective problem, balancing conflicting objectives such as maximizing data throughput, minimizing transmission delay and data loss, finding low-cost paths, and distributing the load evenly among the routers or switches in the network. Any successful real-world algorithm must also be able to re-route around primary paths that fail or become congested. In the authors' hybrid GA, a shortest-path-first algorithm, which minimizes the number of hops a given data packet must pass through, is used to generate the seed for the initial population. However, this solution does not take into account link congestion or failure, which are inevitable conditions in real networks, and so the GA takes over, swapping and exchanging sections of paths.
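One simple way to swap and exchange sections of paths, and a common operator in routing GAs of this general kind, is to splice two parent routes at a node they share. The small network, the example routes and the operator details below are illustrative assumptions, not He and Mort's actual implementation.

import random

# Sketch of a path-crossover operator for network routing: two parent routes
# between the same endpoints are spliced at a node they have in common, and
# any loop introduced by the splice is removed.  The toy topology and routes
# are invented for illustration.

NETWORK = {                       # adjacency list of an invented small network
    "A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "E"],
    "D": ["B", "F"], "E": ["B", "C", "F"], "F": ["D", "E"],
}

def is_valid(route):
    """A route is valid if every consecutive pair of nodes is linked."""
    return all(b in NETWORK[a] for a, b in zip(route, route[1:]))

def crossover(route1, route2):
    """Splice the parents at a randomly chosen common intermediate node."""
    common = [n for n in route1[1:-1] if n in route2[1:-1]]
    if not common:
        return list(route1)                       # no splice point: copy a parent
    node = random.choice(common)
    child = route1[:route1.index(node)] + route2[route2.index(node):]
    seen, loop_free = set(), []                   # drop any loops from the splice
    for n in child:
        if n in seen:
            loop_free = loop_free[:loop_free.index(n)]
            seen = set(loop_free)
        loop_free.append(n)
        seen.add(n)
    return loop_free

if __name__ == "__main__":
    random.seed(2)
    p1 = ["A", "B", "E", "F"]
    p2 = ["A", "C", "E", "B", "D", "F"]
    child = crossover(p1, p2)
    print(child, "valid:", is_valid(child))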
When tested on a data set derived from a real Oracle network database, the GA was found to be able to efficiently route around broken or congested links, balancing traffic load and maximizing the total network throughput. The authors state that these results demonstrate the effectiveness and scalability of the GA and show that optimal or near-optimal solutions can be achieved (p.49). This technique has found real-world applications for similar purposes, as reported in Begley and Beals 1995. The telecommunications company U.S. West (now merged with Qwest) was faced with the task of laying a network of fiber-optic cable. Until recently, the problem of designing the network to minimize the total length of cable laid was solved by an experienced engineer; now the company uses a genetic algorithm to perform the task automatically. The results: "Design time for new networks has fallen from two months to two days and saves US West $1 million to $10 million each" (p.70). Jensen 2003 and Chryssolouris and Subramaniam 2001 applied genetic algorithms to the task of generating schedules for job shops. This is an NP-hard optimization problem with multiple criteria: factors such as cost, tardiness, and throughput must all be taken into account, and job schedules may have to be rearranged on the fly due to machine breakdowns, employee absences, delays in delivery of parts, and other complications, making robustness in a schedule an important consideration. Both papers concluded that GAs are significantly superior to commonly used dispatching rules, producing efficient schedules that can more easily handle delays and breakdowns. These results are not merely theoretical, but have been applied to real-world situations. As reported in Naik 1996, organizers of the 1992 Paralympic Games used a GA to schedule events. As reported in Petzinger 1995, John Deere & Co. has used GAs to generate schedules for a Moline, Illinois, plant that manufactures planters and other heavy agricultural equipment. Like luxury cars, these can be built in a wide variety of configurations with many different parts and options, and the vast number of possible ways to build them made efficient scheduling a seemingly intractable problem. Productivity was hampered by scheduling bottlenecks, worker teams were bickering, and money was being lost. Finally, in 1993, Deere turned to Bill Fulkerson, a staff analyst and engineer who conceived of using a genetic algorithm to produce schedules for the plant. Overcoming initial skepticism, the GA quickly proved itself: monthly output has risen by 50 percent, overtime has nearly vanished, and other Deere plants are incorporating GAs into their own scheduling. As reported in Rao 1998, Volvo has used an evolutionary program called OptiFlex to schedule its million-square-foot factory in Dublin, Virginia, a task that requires handling hundreds of constraints and millions of possible permutations for each vehicle. Like all genetic algorithms, OptiFlex works by randomly combining different scheduling possibilities and variables, determining their fitness by ranking them according to costs, benefits and constraints, and then causing the best solutions to swap genes before sending them back into the population for another trial. Until recently, this daunting task was handled by a human engineer who took up to four days to produce the schedule for each week; now, thanks to GAs, this task can be completed in one day with minimal human intervention. As reported in Lemley 2001,
United Distillers and Vintners, a Scottish company that is the largest and most profitable spirits distributor in the world and accounts for over one-third of global grain whiskey production, uses a genetic algorithm to manage its inventory and supply. This is a daunting task, requiring the efficient storage and distribution of over 7 million barrels containing 60 distinct recipes among a vast system of warehouses and distilleries, depending on a multitude of factors such as age, malt number, wood type and market conditions. Previously, coordinating this complex flow of supply and demand required five full-time employees. Today, a few keystrokes on a computer instruct a genetic algorithm to generate a new schedule each week, and warehouse efficiency has nearly doubled. Beasley, Sonander and Havelock 2001 used a GA to schedule airport landings at London Heathrow, the United Kingdom's busiest airport. This is a multiobjective problem that involves, among other things, minimizing delays and maximizing the number of flights while maintaining adequate separation distances between planes (air vortices that form in a plane's wake can be dangerous to another flying too closely behind). When compared to actual schedules from a busy period at the airport, the GA was able to reduce average wait time by 2-5%, equating to one to three extra flights taking off and landing per hour - a significant improvement. However, even greater improvements have been achieved: as reported in Wired 2002, major international airports and airlines such as Heathrow, Toronto, Sydney, Las Vegas, San Francisco, America West Airlines, AeroMexico, and Delta Airlines are using genetic algorithms to schedule takeoffs, landings, maintenance and other tasks, in the form of Ascent Technology's SmartAirport Operations Center software (see ascentfaq.html). Breeding and mutating solutions in the form of schedules that incorporate thousands of variables, Ascent beats humans hands-down, raising productivity by up to 30 percent at every airport where it's been implemented.
Systems engineering
Benini and Toffolo 2002 applied a genetic algorithm to the multi-objective task of designing wind turbines used to generate electric power. This design is a complex procedure characterized by several trade-off decisions. "The decision-making process is very difficult and the design trends are not uniquely established" (p.357); as a result, there are a number of different turbine types in existence today and no agreement on which, if any, is optimal. Mutually exclusive objectives such as maximum annual energy production and minimal cost of energy must be taken into account. In this paper, a multi-objective evolutionary algorithm was used to find the best trade-offs between these goals, constructing turbine blades with the optimal configuration of characteristics such as tip speed, hub/tip ratio, and chord and twist distribution. In the end, the GA was able to find solutions competitive with commercial designs, as well as more clearly elucidate the margins by which annual energy production can be improved without producing overly expensive designs. Haas, Burnham and Mills 1997 used a multiobjective genetic algorithm to optimize the beam shape, orientation and intensity of X-ray emitters used in targeted radiotherapy to destroy cancerous tumors while sparing healthy tissue. (X-ray photons aimed at a tumor tend to be partially scattered by structures within the body, unintentionally damaging internal organs.
The challenge is to minimize this effect while maximizing the radiation dose delivered to the tumor.) Using a rank-based fitness model, the researchers began with the solution produced by the conventional method, an iterative least-squares approach, and then used the GA to modify and improve it. By constructing a model of a human body and exposing it to the beam configuration evolved by the GA, they found good agreement between the predicted and actual distributions of radiation. The authors conclude that their results show a sparing of healthy organs that could not be achieved using conventional techniques (p.1745).

Lee and Zak 2002 used a genetic algorithm to evolve a set of rules to control an automotive anti-lock braking system. While the ability of antilock brake systems to reduce stopping distance and improve maneuverability has saved many lives, the performance of an ABS is dependent on road surface conditions: for example, an ABS controller that is optimized for dry asphalt will not work as well on wet or icy roads, and vice versa. In this paper, the authors propose a GA to fine-tune an ABS controller that can identify the road surface properties (by monitoring wheel slip and acceleration) and respond accordingly, delivering the appropriate amount of braking force to maximize the wheels' traction. In testing, the genetically tuned ABS "exhibits excellent tracking properties" (p.206) and was "far superior" (p.209) to two other braking methods, quickly finding new optimal values for wheel slip when the type of terrain changes beneath a moving car and reducing total stopping distance. "The lesson we learned from our experiment ... is that a GA can help to fine-tune even a well-designed controller. In our case, we already had a good solution to the problem; yet, with the help of a GA, we were able to improve the control strategy significantly. In summary, it seems that it is worthwhile to try to apply a GA, even to a well-designed controller, because there is a good chance that one can find a better set of the controller settings using GAs" (p.211).

As cited in Schechter 2000, Dr. Peter Senecal of the University of Wisconsin used small-population genetic algorithms to improve the efficiency of diesel engines. These engines work by injecting fuel into a combustion chamber filled with extremely compressed, and therefore extremely hot, air - hot enough to cause the fuel to explode and drive a piston that produces the vehicle's motive force. This basic design has changed little since it was invented by Rudolf Diesel in 1893; although vast amounts of effort have been put into making improvements, this is a very difficult task to perform analytically, because it requires precise knowledge of the turbulent behavior displayed by the fuel-air mixture and simultaneous variation of many interdependent parameters. Senecal's approach, however, eschewed the use of such problem-specific knowledge and instead worked by evolving parameters such as the pressure of the combustion chamber, the timing of the fuel injections and the amount of fuel in each injection. The result: the simulation produced an improved engine that consumed 15 percent less fuel than a normal diesel engine and produced one-third as much nitric oxide exhaust and half as much soot. Senecal's team then built a real diesel engine according to the specifications of the evolved solution and got the same results. Senecal is now moving on to evolving the geometry of the engine itself, which may produce even greater improvements.
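Senecal's actual simulation code is not public and is not reproduced here; the following is only a minimal sketch, in Python, of the general approach just described. A small population of real-valued parameter vectors is repeatedly mutated and re-selected against a stand-in fitness function; in the real work, that function would be an engine simulation scoring fuel consumption and emissions, and the three parameters and their ranges below are placeholders, not Senecal's values.

    import random

    # Placeholder ranges for three hypothetical parameters:
    # combustion-chamber pressure, injection timing, fuel mass per injection.
    BOUNDS = [(50.0, 200.0), (-20.0, 20.0), (10.0, 80.0)]

    def fitness(params):
        # Stand-in for the engine simulation; a real fitness would combine
        # simulated fuel consumption, nitric oxide and soot output.
        p, t, f = params
        return -((p - 120.0) ** 2 + (t - 5.0) ** 2 + (f - 40.0) ** 2)

    def mutate(params, sigma=2.0):
        # Gaussian mutation, clipped to the legal parameter ranges.
        return [min(hi, max(lo, x + random.gauss(0, sigma)))
                for x, (lo, hi) in zip(params, BOUNDS)]

    def evolve(pop_size=8, generations=200):
        # Small random initial population of parameter vectors.
        pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]   # keep the better half
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)

    print(evolve())

Even this toy version shows why small populations suffice when each fitness evaluation is expensive: only a handful of candidate parameter sets need to be simulated per generation.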
As cited in Begley and Beals 1995, Texas Instruments used a genetic algorithm to optimize the layout of components on a computer chip, placing structures so as to minimize the overall area and create the smallest chip possible. Using a connection strategy that no human had thought of, the GA came up with a design that took 18 percent less space.

Finally, as cited in Ashley 1992, a proprietary software system known as Engineous that employs genetic algorithms is being used by companies in the aerospace, automotive, manufacturing, turbomachinery and electronics industries to design and improve engines, motors, turbines and other industrial devices. In the words of its creator, Dr. Siu Shing Tong, Engineous is "a master tweaker, tirelessly trying out scores of what-if scenarios until the best possible design emerges" (p.49). In one trial of the system, Engineous was able to produce a 0.92 percent increase in the efficiency of an experimental turbine in only one week, while ten weeks of work by a human designer produced only a 0.5 percent improvement. Granted, Engineous does not rely solely on genetic algorithms; it also employs numerical optimization techniques and expert systems which use logical if-then rules to mimic the decision-making process of a human engineer. However, these techniques are heavily dependent on domain-specific knowledge, lack general applicability, and are prone to becoming trapped on local optima. By contrast, the use of genetic algorithms allows Engineous to explore regions of the search space that other methods miss.

Engineous has found widespread use in a variety of industries and problems. Most famously, it was used to improve the turbine power plant of the Boeing 777 airliner; as reported in Begley and Beals 1995, the genetically optimized design was almost 1 percent more fuel-efficient than previous engines, which in a field such as this is a windfall. Engineous has also been used to optimize the configuration of industrial DC motors, hydroelectric generators and steam turbines, to plan out power grids, and to design superconducting generators and nuclear power plants for orbiting satellites. Rao 1998 also reports that NASA has used Engineous to optimize the design of a high-altitude airplane for sampling ozone depletion, which must be both light and efficient.

Creationist arguments

As one might expect, the real-world demonstration of the power of evolution that GAs represent has proven surprising and disconcerting for creationists, who have always claimed that only intelligent design, not random variation and selection, could have produced the information content and complexity of living things. They have therefore argued that the success of genetic algorithms does not allow us to infer anything about biological evolution. The criticisms of two anti-evolutionists, representing two different viewpoints, will be addressed: young-earth creationist Dr. Don Batten of Answers in Genesis, who has written an article entitled "Genetic algorithms -- do they show that evolution works?", and old-earth creationist and intelligent-design advocate Dr. William Dembski, whose recent book No Free Lunch (Dembski 2002) discusses this topic.

Some traits in living things are qualitative, whereas GAs are always quantitative

Batten states that GAs must be quantitative, so that any improvement can be selected for. This is true. He then goes on to say, "Many biological traits are qualitative--it either works or it does not, so there is no step-wise means of getting from no function to the function."
This assertion has not been demonstrated, however, and is not supported by evidence. Batten does not even attempt to give an example of a biological trait that either works or does not and thus cannot be built up in a stepwise fashion. But even if he did offer such a trait, how could he possibly prove that there is no stepwise path to it? Even if we do not know of such a path, does it follow that there is none? Of course not. Batten is effectively claiming that if we do not understand how certain traits evolved, then it is impossible for those traits to have evolved - a classic example of the elementary logical fallacy of argument from ignorance. The search space of all possible variants of any given biological trait is enormous, and in most cases our knowledge subsumes only an infinitesimal fraction of the possibilities. There may well be numerous paths to a structure which we do not yet know about; there is no reason whatsoever to believe that our current ignorance sets limits on our future progress. Indeed, history gives us reason to be confident: scientists have made enormous progress in explaining the evolution of many complex biological structures and systems, both macroscopic and microscopic (for example, see these pages on the evolution of complex molecular systems, clock genes, the woodpecker's tongue, or the bombardier beetle). We are justified in believing it likely that the ones that have so far eluded us will also be made clear in the future.

In fact, GAs themselves give us an excellent reason to believe this. Many of the problems to which they have been applied are complex engineering and design issues where the solution was not known ahead of time and therefore the problem could not be rigged to aid the algorithm's success. If the creationists were correct, it would have been entirely reasonable to expect genetic algorithms to fail dismally time after time when applied to these problems, but instead, just the opposite has occurred: GAs have discovered powerful, high-quality solutions to difficult problems in a diverse variety of fields. This calls into serious question whether there even are any problems such as Batten describes, whose solutions are inaccessible to an evolutionary process.

GAs select for one trait at a time, whereas living things are multidimensional

Batten states that in GAs, "a single trait is selected for, whereas any living thing is multidimensional," and asserts that in living things with hundreds of traits, selection has to operate on all traits that affect survival, whereas "a GA will not work with three or four different objectives, or I dare say even just two." This argument reveals Batten's profound ignorance of the relevant literature. Even a cursory survey of the work done on evolutionary algorithms (or a look at an earlier section of this essay) would have revealed that multiobjective genetic algorithms are a major, thriving area of research within the broader field of evolutionary computation, and would have prevented him from making such an embarrassingly incorrect claim. There are journal articles, entire issues of prominent journals on evolutionary computation, entire conferences, and entire books on the topic of multiobjective GAs. Coello 2000 provides a very extensive survey, with five pages of references to papers on the use of multiobjective genetic algorithms in a broad range of fields; see also Fleming and Purshouse 2002; Hanne 2000; Zitzler and Thiele 1999; Fonseca and Fleming 1995; Srinivas and Deb 1994; Goldberg 1989, p.197.
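To make concrete what these multiobjective algorithms compute, here is a minimal sketch of the Pareto-dominance test that underlies most of them. The turbine-style objective values are invented for illustration and are not taken from any of the papers cited here.

    def dominates(a, b):
        # True if candidate a is at least as good as b on every objective
        # (all objectives maximized here) and strictly better on at least one.
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def pareto_front(candidates):
        # The non-dominated candidates: the trade-off set that a
        # multiobjective GA ultimately presents to its human users.
        return [c for c in candidates
                if not any(dominates(other, c)
                           for other in candidates if other != c)]

    # Hypothetical (energy output, negative cost) pairs for four designs.
    designs = [(5.0, -3.0), (4.0, -2.0), (3.0, -2.5), (5.5, -4.0)]
    print(pareto_front(designs))   # three designs survive; one is dominated

No single "univalent" score is ever computed: candidates are compared objective by objective, which is exactly the capability Batten claims cannot exist.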
For some books and papers discussing the use of multiobjective GAs to solve specific problems, see: Obayashi et al. 2000; Sasaki et al. 2001; Benini and Toffolo 2002; Haas, Burnham and Mills 1997; Chryssolouris and Subramaniam 2001; Hughes and Leyland 2000; He and Mort 2000; Kewley and Embrechts 2002; Beasley, Sonander and Havelock 2001; Sato et al. 2002; Tang et al. 1996; Williams, Crossley and Lang 2001; Koza et al. 1999; and Koza et al. 2003. For a comprehensive repository of citations on multiobjective GAs, see lania.mx.

GAs do not allow the possibility of extinction or error catastrophe

Batten claims that, in GAs, "something always survives to carry on the process," while this is not necessarily true in the real world - in short, that GAs do not allow the possibility of extinction. However, this is not true; extinction can occur. For example, some GAs use a model of selection called thresholding, in which individuals must have a fitness higher than some predetermined level to survive and reproduce (Haupt and Haupt 1998, p. 37). If no individual meets this standard in such a GA, the population can indeed go extinct. But even in GAs that do not use thresholding, states analogous to extinction can occur. If mutation rates are too high or selective pressures too strong, a GA will never find a feasible solution. The population may become hopelessly scrambled as deleterious mutations build up faster than selection can remove them and disrupt fitter candidates (error catastrophe), or it may thrash around helplessly, unable to achieve any gain in fitness large enough to be selected for. Just as in nature, there must be a balance, or a solution will never be reached. The one advantage a programmer has in this respect is that, if this does happen, he can reload the program with different values - for population size, for mutation rate, for selection pressure - and start over again. Obviously this is not an option for living things. Batten says, "There is no rule in evolution that says that some organism(s) in the evolving population will remain viable no matter what mutations occur," but there is no such rule in genetic algorithms either.

Batten also states that "the GAs that I have looked at artificially preserve the best of the previous generation and protect it from mutations or recombination in case nothing better is produced in the next iteration." This criticism will be addressed in the next point.

GAs ignore the cost of substitution

Batten's next claim is that GAs neglect Haldane's dilemma, which states that an allele contributing less to an organism's fitness will take a correspondingly longer time to become fixed in a population. Obviously, what he is referring to is the elitist selection technique, which automatically selects the best candidate at each generation, no matter how small its advantage over its competitors may be. He is right to suggest that, in nature, very slight competitive advantages might take much longer to propagate; genetic algorithms are not an exact model of biological evolution in this respect. However, this is beside the point. Elitist selection is an idealization of biological evolution - a model of what would happen in nature if chance did not intervene from time to time. As Batten acknowledges, Haldane's dilemma does not state that a slightly advantageous mutation will never become fixed in a population; it states that it will take more time for it to do so.
However, when computation time is at a premium or a GA researcher wishes to obtain a solution more quickly, it may be desirable to skip this process by implementing elitism. An important point is that elitism does not affect which mutations arise; it merely ensures the selection of the best ones that do arise. It would not matter what the strength of selection was if information-increasing mutations did not occur. In other words, elitism speeds up convergence once a good solution has been discovered - it does not bring about an outcome that would not otherwise have occurred. Therefore, if genetic algorithms with elitism can produce new information, then so can evolution in the wild.

Furthermore, not all GAs use elitist selection. Many do not, instead relying only on roulette-wheel selection and other stochastic sampling techniques, and yet these have been no less successful. (A minimal sketch of roulette-wheel selection, with elitism as an optional extra, appears at the end of this section.) For instance, Koza et al. 2003, pp. 8-9, give examples of 36 instances where genetic programming has produced human-competitive results, including the automated recreation of 21 previously patented inventions (six of which were patented during or after 2000), 10 of which duplicate the functionality of the patent in a new way, as well as two patentable new inventions and five new algorithms that outperform any human-written algorithms for the same purpose. As Dr. Koza states in an earlier reference to the same work (1999, p.1070): "The elitist strategy is not used." Some other papers cited in this essay in which elitism is not used include: Robin et al. 2003; Rizki, Zmuda and Tamburino 2002; Chryssolouris and Subramaniam 2001; Burke and Newall 1999; Glen and Payne 1995; Au, Chan and Yao 2003; Jensen 2003; Kewley and Embrechts 2002; Williams, Crossley and Lang 2001; and Mahfoud and Mani 1996. In each of these cases, without any mechanism to ensure that the best individuals were selected at each generation, and without exempting those individuals from potentially detrimental random change, genetic algorithms still produced powerful, efficient, human-competitive results. This fact may be surprising to creationists such as Batten, but it is wholly expected by advocates of evolution.

GAs ignore generation time constraints

This criticism is puzzling. Batten claims that a single generation in a GA can take microseconds, whereas a single generation in any living organism can take anywhere from minutes to years. This is true, but how it is supposed to bear on the validity of GAs as evidence for evolution is not explained. If a GA can generate new information, regardless of how long it takes, then surely evolution in the wild can do so as well; that GAs can indeed do so is all this essay intends to demonstrate. The only remaining issue would then be whether biological evolution has actually had enough time to cause significant change, and the answer to that question would be one for biologists, geologists and physicists, not computer programmers. The answer these scientists have provided is fully in accord with evolutionary timescales: numerous lines of independent evidence, including radiometric isochron dating, the cooling rates of white dwarfs, the nonexistence in nature of isotopes with short half-lives, the recession rates of distant galaxies, and analysis of the cosmic microwave background radiation, all converge on the same conclusion - an Earth and a universe many billions of years old, easily long enough, by all reasonable estimates, for evolution to produce all the diversity of life we see today.
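Returning briefly to the selection schemes contrasted above, the following minimal sketch shows roulette-wheel (fitness-proportionate) selection with elitism as an optional, separable extra. The bit-counting problem and all parameter values are invented purely for illustration and are not drawn from any of the cited implementations.

    import random

    def roulette_select(population, fitnesses):
        # Fitness-proportionate selection: even poor candidates have some
        # chance of being chosen, and the best one is not guaranteed a slot.
        pick = random.uniform(0, sum(fitnesses))
        running = 0.0
        for individual, fit in zip(population, fitnesses):
            running += fit
            if running >= pick:
                return individual
        return population[-1]

    def next_generation(population, fitnesses, mutate, elitist=False):
        # With elitist=True the current best individual is copied over
        # unchanged; otherwise every member of the new generation is a
        # mutated copy of a stochastically chosen parent.
        new_pop = []
        if elitist:
            best = max(zip(population, fitnesses), key=lambda pair: pair[1])[0]
            new_pop.append(best)
        while len(new_pop) < len(population):
            new_pop.append(mutate(roulette_select(population, fitnesses)))
        return new_pop

    # Toy usage: maximize the number of 1s in a 10-bit string.
    def flip_one_bit(bits):
        i = random.randrange(len(bits))
        return bits[:i] + str(1 - int(bits[i])) + bits[i + 1:]

    pop = ["".join(random.choice("01") for _ in range(10)) for _ in range(6)]
    for _ in range(50):
        fits = [b.count("1") + 1 for b in pop]  # +1 keeps every fitness positive
        pop = next_generation(pop, fits, flip_one_bit)
    print(max(pop, key=lambda b: b.count("1")))

Note that elitism here is a single optional line; the generation and selection of variants works identically with or without it, which is the point made above.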
GAs employ unrealistically high rates of mutation and reproduction

Batten asserts, without providing any supporting evidence or citations, that GAs commonly produce 100s or 1000s of offspring per generation, a rate that even bacteria, the fastest-reproducing biological organisms, cannot match. This criticism misses the mark in several ways. First of all, if the metric being used is (as it should be) number of offspring per generation, rather than number of offspring per unit of absolute time, then there clearly are biological organisms that reproduce at rates faster than that of bacteria and roughly equal to the rates Batten claims are unrealistic. For example, a single frog can lay thousands of eggs at a time, each of which has the potential to develop into an adult. Granted, most of these usually will not survive, due to resource limitations and predation, but then most of the offspring in each generation of a GA will not go on either. Secondly, and more importantly, a genetic algorithm working on a problem is not meant to represent a single organism. Instead, a genetic algorithm is more analogous to an entire population of organisms - after all, it is populations, not individuals, that evolve. Of course, it is eminently plausible for a whole population to collectively have hundreds or thousands of offspring per generation. (Creationist Walter ReMine makes this same mistake with regard to Dr. Richard Dawkins' weasel program. See this Post of the Month for more.)

Additionally, Batten says, the mutation rate is artificially high in GAs, whereas living organisms have error-checking machinery designed to limit the mutation rate to about 1 in 10 billion base pairs (though this figure is too low - the actual rate is closer to 1 in 1 billion; see Dawkins 1996, p.124). This is true, of course. If GAs mutated at this rate, they would take far too long to solve real-world problems. What should be considered relevant, however, is the rate of mutation relative to the size of the genome. The mutation rate should be high enough to promote a sufficient amount of diversity in the population without overwhelming the individuals. An average human will possess between one and five new mutations; this is not at all unrealistic for the offspring in a GA.

GAs have artificially small genomes

Batten's argument that the genome of a genetic algorithm is "artificially small" and "only does one thing" is badly misguided. In the first place, as we have seen, it is not true that a GA only does one thing: there are many examples of genetic algorithms specifically designed to optimize many parameters simultaneously, often far more parameters simultaneously than a human designer ever could. And how exactly does Batten quantify "artificially small"? Many evolutionary algorithms, such as John Koza's genetic programming, use variable-length encodings in which the size of candidate solutions can grow arbitrarily large. Batten claims that even the simplest living organism has far more information in its genome than a GA has ever produced, but while organisms living today may have relatively large genomes, that is because much complexity has been gained over the course of billions of years of evolution.
As the Probability of Abiogenesis article points out, there is good reason to believe that the earliest living organisms were very much simpler than any species currently extant - self-replicating molecules probably no longer than 30 or 40 subunits, which could easily be specified by the 1800 bits of information that Batten apparently concedes at least one GA has generated. Genetic algorithms are likewise a very new technique whose full potential has not yet been tapped; digital computers themselves are only a few decades old, and as Koza (2003, p. 25) points out, evolutionary computing techniques have been generating increasingly substantial and complex results over the last 15 years, in synchrony with the ongoing rapid increase in computing power often referred to as Moore's Law. Just as early life was very simple compared to what came after, today's genetic algorithms, despite the impressive results they have already produced, are likely to give rise to far greater things in the future.

GAs ignore the possibility of mutation occurring throughout the genome

Batten apparently does not understand how genetic algorithms work, and he shows it by making this argument. He states that in real life, mutations occur throughout the genome, not just in a gene or section that specifies a given trait. This is true, but when he says that the same is not true of GAs, he is wrong. Exactly as in living organisms, GAs permit mutation and recombination to occur anywhere in the genomes of their candidate solutions; exactly as in living organisms, GAs must weed out the deleterious changes while simultaneously selecting for the beneficial ones.

Batten goes on to claim that the program itself is protected from mutations, that only target sequences are mutated, and that if the program itself were mutated it would soon crash. This criticism, however, is irrelevant. There is no reason why the governing program of a GA should be mutated. The program is not part of the genetic algorithm; the program is what supervises the genetic algorithm and mutates the candidate solutions, which are what the programmer is seeking to improve. The program running the GA is not analogous to the reproductive machinery of an organism, a comparison Batten tries to make. Rather, it is analogous to the invariant natural laws that govern the environments in which living organisms live and reproduce, and these are neither expected to change nor in need of protection from change.

GAs ignore problems of irreducible complexity

Using old-earth creationist Michael Behe's argument of irreducible complexity, Batten argues that "many biological traits require many different components to be present, functioning together, for the trait to exist at all," whereas this does not happen in GAs. However, it is trivial to show that such a claim is false, as genetic algorithms have produced irreducibly complex systems. For example, the voice-recognition circuit Dr. Adrian Thompson evolved (Davidson 1997) is composed of 37 core logic gates. Five of them are not even connected to the rest of the circuit, yet all 37 are required for the circuit to work: if any of them are disconnected from their power supply, the entire system ceases to function. This fits Behe's definition of an irreducibly complex system and shows that an evolutionary process can produce such things. It should be noted that this is essentially the same argument as the first one, merely presented in different language, and thus the refutation is the same.
Irreducible complexity is not a problem for evolution, whether that evolution is occurring in living beings in the wild or in silicon on a computer's processor chip.

GAs ignore polygeny, pleiotropy, and other genetic complexity

Batten argues that GAs ignore issues of polygeny (the determination of one trait by multiple genes), pleiotropy (one gene affecting multiple traits), and dominant and recessive genes. However, none of these claims are true. GAs do not ignore polygeny and pleiotropy: these properties are merely allowed to arise naturally rather than being deliberately coded in. It is obvious that in any complex interdependent system (i.e., a nonlinear system), the alteration or removal of one part will cause a ripple effect of changes throughout; thus GAs naturally incorporate polygeny and pleiotropy. In the genetic algorithm literature, parameter interaction is called epistasis (a biological term for gene interaction). "When there is little to no epistasis, minimum seeking algorithms [i.e., hill-climbers --A.M.] perform best. Genetic algorithms shine when the epistasis is medium to high" (Haupt and Haupt 1998, p. 31, original emphasis). Likewise, there are some genetic algorithm implementations that do have diploid chromosomes and dominant and recessive genes (Goldberg 1989, p.150; Mitchell 1996, p.22). Those that do not are simply more like haploid organisms, such as bacteria, than like diploid organisms, such as human beings. Since (by certain measures) bacteria are among the most successful organisms on this planet, such GAs remain a good model of evolution.

GAs do not have multiple reading frames

Batten discusses the existence of multiple reading frames in the genomes of some living things, in which the DNA sequences code for different functional proteins when read in different directions or with different starting offsets. He asserts that "creating a GA to generate such information-dense coding would seem to be out of the question." Such a challenge begs for an answer, and here it is: Soule and Ball 2001. In this paper, the authors present a genetic algorithm with multiple reading frames and dense coding, enabling it to store more information than the total length of its genome. Like the three-nucleotide codons that specify amino acids in the genomes of living organisms, this GA's codons were five-digit binary strings. Since the codons were five digits long, there were five different possible reading frames. The sequence 11111 serves as a start codon and 00000 as a stop codon; because start and stop codons could occur anywhere in the genome, the length of each individual was variable. Regions of the chromosome that did not fall between start-stop pairs were ignored.

The GA was tested on four classic function-maximization problems. Initially, the majority of the bits do not participate in any gene - that is, most of a chromosome is non-coding - because the initial random individuals contain relatively few start-stop codon pairs. However, the number of non-participating bits decreases extremely rapidly: during the course of the run, the GA can increase the effective length of its genome by introducing new start codons in different reading frames. By the end of the run the amount of overlap is quite high, with many bits participating in several (and often all five) genes. On all test problems, the GA started with an average of 5 variables specified; by the end of the run, that number had increased to an average of around 25.
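The paper's own code is not reproduced here, but to make the encoding scheme just described concrete, the following sketch decodes a bit string under the stated conventions (five-bit codons, 11111 as start, 00000 as stop, five possible reading frames). The sample chromosome is invented; in an evolved individual, start codons arising in other frames would add further, overlapping genes.

    START, STOP, CODON = "11111", "00000", 5

    def extract_genes(chromosome):
        # Scan each of the five reading frames; the codons between a start
        # codon and the next stop codon form one gene, so a single bit can
        # participate in several overlapping genes.
        genes = []
        for frame in range(CODON):
            codons = [chromosome[i:i + CODON]
                      for i in range(frame, len(chromosome) - CODON + 1, CODON)]
            inside, current = False, []
            for codon in codons:
                if not inside and codon == START:
                    inside, current = True, []
                elif inside and codon == STOP:
                    genes.append("".join(current))
                    inside = False
                elif inside:
                    current.append(codon)
        return genes

    # A made-up chromosome containing two genes in reading frame 0.
    chromosome = "11111" "01101" "10010" "00000" "11111" "01010" "00000"
    print(extract_genes(chromosome))

Because the decoder simply ignores regions outside start-stop pairs, mutations that create new start codons in a shifted frame lengthen the effective genome without lengthening the chromosome, which is how the dense, overlapping coding described above arises.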
In the test problems, the GA with multiple reading frames produced significantly better solutions than a standard GA on two of the four problems and better average solutions on the remaining two. In one problem, the GA successfully compressed 625 total bits of information into a chromosome only 250 bits long by using alternative reading frames. The authors label this behavior "extremely sophisticated" and conclude that "these data show that a GA can successfully use reading frames despite the added complexity" and that "it is clear that a GA can introduce new genes as necessary to solve a given problem, even with the difficulties imposed by using start and stop codons and overlapping genes."

GAs have preordained goals

Like several others, this objection shows that Batten does not fully understand what a genetic algorithm is and how it works. He argues that GAs, unlike evolution, have goals predetermined and specified at the outset, and as an example of this, he offers Dr. Richard Dawkins' weasel program. However, the weasel program is not a true genetic algorithm, and is not typical of genetic algorithms, for precisely that reason. It was not intended to demonstrate the problem-solving power of evolution. Instead, its only intent was to show the difference between single-step selection (the infamous tornado blowing through a junkyard producing a 747) and cumulative, multi-step selection. It did have a specific goal predetermined at the outset. True genetic algorithms, however, do not.

In a broadly general sense, GAs do have a goal: namely, to find an acceptable solution to a given problem. In this same sense, evolution also has a goal: to produce organisms that are better adapted to their environment and thus experience greater reproductive success. But just as evolution is a process without specific goals, GAs do not specify at the outset how a given problem should be solved. The fitness function is merely set up to evaluate how well a candidate solution performs, without specifying any particular way it should work and without passing judgment on whatever way it does invent. The solution itself then emerges through a process of mutation and selection.

Batten's next statement shows clearly that he does not understand what a genetic algorithm is. He asserts that "perhaps if the programmer could come up with a program that allowed anything to happen and then measured the survivability of the organisms, it might be getting closer to what evolution is supposed to do" - but that is exactly how genetic algorithms work. They randomly generate candidate solutions and randomly mutate them over many generations. No configuration is specified in advance; as Batten puts it, anything is allowed to happen. As John Koza (2003, p. 37) writes, uncannily echoing Batten's words: "An important feature ... is that the selection in genetic programming is not greedy. Individuals that are known to be inferior will be selected to a certain degree. The best individual in the population is not guaranteed to be selected. Moreover, the worst individual in the population will not necessarily be excluded. Anything can happen and nothing is guaranteed." (An earlier section discussed this very point as one of a GA's strengths.) And yet, by applying a selective filter to these randomly mutating candidates, efficient, complex and powerful solutions to difficult problems arise - solutions that were not designed by any intelligence and that can often equal or outperform solutions designed by humans.
Batten's blithe assertion that "of course that is impossible" is squarely contradicted by reality.

GAs do not actually generate new information

Batten's final criticism runs: "With a particular GA, we need to ask how much of the information generated by the program is actually specified in the program, rather than being generated de novo." He charges that GAs often do nothing more than find the best way for certain modules to interact, when both the modules themselves and the ways they can interact are specified ahead of time. It is difficult to know what to make of this argument. Any imaginable problem - terms in a calculus equation, molecules in a cell, components of an engine, stocks on a financial market - can be expressed in terms of modules that interact in given ways. If all one has is unspecified modules that interact in unspecified ways, there is no problem to be solved. Does this mean that the solution to no problem requires the generation of new information?

In regard to Batten's criticism that information contained in the solution is prespecified in the problem, the best way to assuage his concerns is to point out the many examples in which GAs begin with randomly generated initial populations that are not in any way designed to help the GA solve the problem. Some such examples include: Graham-Rowe 2004; Davidson 1997; Assion et al. 1998; Giro, Cyrillo and Galvão 2002; Glen and Payne 1995; Chryssolouris and Subramaniam 2001; Williams, Crossley and Lang 2001; Robin et al. 2003; Andreou, Georgopoulos and Likothanassis 2002; Kewley and Embrechts 2002; Rizki, Zmuda and Tamburino 2002; and especially Koza et al. 1999 and Koza et al. 2003, which discuss the use of genetic programming to generate 36 human-competitive inventions in analog circuit design, molecular biology, algorithmics, industrial controller design, and other fields, all starting from populations of randomly generated initial candidates. Granted, some GAs do begin with intelligently generated solutions which they then seek to improve, but this is irrelevant: in such cases the aim is not just to return the initially input solution, but to improve it by the production of new information. In any case, even if the initial situation is as Batten describes, finding the most efficient way a number of modules can interact under a given set of constraints can be a far from trivial task, and one whose solution involves a considerable amount of new information: scheduling at international airports, for example, or factory assembly lines, or distributing casks among warehouses and distilleries. Again, GAs have proven themselves effective at solving problems whose complexity would swamp any human. In light of the multiple innovations and unexpectedly effective solutions arising from GAs in many fields, Batten's claim that "the amount of new information generated [by a GA] is usually quite trivial" rings hollow indeed.

Old-earth creationist Dr. William Dembski's recent book, No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence, is largely devoted to the topic of evolutionary algorithms and how they relate to biological evolution. In particular, Dembski's book is concerned with an elusive quality he calls "specified complexity," which he asserts is contained in abundance in living things, and which he further asserts evolutionary processes are incapable of generating, leaving design through unspecified mechanisms by an unidentified intelligent designer as the only alternative.
To bolster his case, Dembski appeals to a class of mathematical theorems known as the No Free Lunch theorems, which he claims prove that evolutionary algorithms, on average, do no better than blind search. Richard Wein has written an excellent and comprehensive rebuttal to Dembski, entitled "Not a Free Lunch But a Box of Chocolates," and its points will not be reproduced here. I will instead focus on chapter 4 of Dembski's book, which deals in detail with genetic algorithms.

Dembski has one main argument against GAs, which is developed at length throughout this chapter. While he does not deny that they can produce impressive results - indeed, he says that there is something "oddly compelling and almost magical" (p.221) about the way GAs can find solutions that are unlike anything designed by human beings - he argues that their success is due to the specified complexity that is smuggled into them by their human designers and subsequently embodied in the solutions they produce. In other words, "all the specified complexity we get out of an evolutionary algorithm has first to be put into its construction and into the information that guides the algorithm. Evolutionary algorithms therefore do not generate or create specified complexity, but merely harness already existing specified complexity" (p.207).

The first problem evident in Dembski's argument is this: although his chapter on evolutionary algorithms runs for approximately 50 pages, the first 30 of those pages discuss nothing but Dr. Richard Dawkins' weasel algorithm, which, as already discussed, is not a true genetic algorithm and is not representative of genetic algorithms. Dembski's other two examples - the crooked-wire genetic antennas of Edward Altshuler and Derek Linden and the checkers-playing neural nets of Kumar Chellapilla and David Fogel - are only introduced within the last 10 pages of the chapter and are discussed for three pages, combined. This is a serious deficiency, considering that the weasel program is not representative of most work being done in the field of evolutionary computation; nevertheless, Dembski's arguments relating to it will be analyzed.

In regard to the weasel program, Dembski states that "Dawkins and fellow Darwinists use this example to illustrate the power of evolutionary algorithms" (p.182), and, again, that "Darwinists ... are quite taken with the METHINKS IT IS LIKE A WEASEL example and see it as illustrating the power of evolutionary algorithms to generate specified complexity" (p.183). This is a straw man of Dembski's creation (not least because Dawkins' book was written long before Dembski ever coined that term). Here is what Dawkins really says about the purpose of his program: "What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years" (Dawkins 1996, p.49, emphasis original). In other words, the weasel program was intended to demonstrate the difference between two different kinds of selection: single-step selection, in which a complex result is produced by pure chance in a single leap, and cumulative selection, in which a complex result is built up bit by bit via a filtering process that preferentially preserves improvements. It was never intended to be a simulation of evolution as a whole.
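To see how stark the difference between the two procedures is, here is a minimal sketch in the spirit of the weasel program. This is not Dawkins' own code; the mutation rate and population size are arbitrary, and the parent phrase is retained each generation purely for simplicity.

    import random, string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "

    def score(phrase):
        # Number of characters matching the prespecified target phrase.
        return sum(a == b for a, b in zip(phrase, TARGET))

    def mutate(phrase, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in phrase)

    # Cumulative selection: each generation, keep the best of 100 mutated
    # copies (plus the parent, so progress is never lost).
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while best != TARGET:
        generations += 1
        best = max([best] + [mutate(best) for _ in range(100)], key=score)
    print("cumulative selection reached the target in", generations, "generations")

    # Single-step selection would instead demand a perfect 28-character
    # phrase by pure chance in one trial: odds of (1/27)**28 per attempt,
    # so for all practical purposes the target is never reached that way.

A run of the cumulative version typically converges in a few hundred generations, while the single-step odds are so small that no realistic amount of computing time would suffice; note that, exactly as stated above, both versions have the answer written into them in advance, which is why this toy program says nothing about what true genetic algorithms can invent.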
Single-step selection is the absurdly improbable process frequently attacked in creationist literature by comparing it to a tornado blowing through a junkyard producing a 747 airliner, or an explosion in a print shop producing a dictionary. Cumulative selection is what evolution actually uses. Using single-step selection to achieve a functional result of any significant complexity, one would have to wait, on average, many times the current age of the universe; using cumulative selection, the same result can be reached in a comparatively very short length of time. Demonstrating this difference was the point of Dawkins' weasel program, and that was the only point of that program. In a footnote to this chapter, Dembski writes, "It is remarkable how Dawkins' example gets recycled without any indication of the fundamental difficulties that attend it" (p.230), but it is only misconceptions in the minds of creationists such as Dembski and Batten, who attack the weasel program for not demonstrating something it was never intended to demonstrate, that give rise to these difficulties.

Unlike every example of evolutionary algorithms discussed in this essay, the weasel program does indeed have a single, prespecified outcome, and the quality of the solutions it generates is judged by explicitly comparing them to that prespecified outcome. Therefore, Dembski is quite correct when he says that the weasel program does not generate new information. However, he then makes a gigantic and completely unjustified leap when he extrapolates this conclusion to all evolutionary algorithms: "As the sole possibility that Dawkins' evolutionary algorithm can attain, the target sequence in fact has minimal complexity. Evolutionary algorithms are therefore incapable of generating true complexity" (p.182). Even Dembski seems to recognize that the weasel program is atypical when he writes that "most evolutionary algorithms in the literature are programmed to search a space of possible solutions to a problem until they find an answer - not, as Dawkins does here, by explicitly programming the answer into them in advance" (p.182). But then, having given a perfectly good reason why the weasel program is not representative of GAs as a whole, he inexplicably goes on to make precisely that fallacious generalization.

In reality, the weasel program is significantly different from most genetic algorithms, and therefore Dembski's argument from analogy does not hold up. True evolutionary algorithms, such as the examples discussed in this essay, do not simply find their way back to solutions already discovered by other methods. Instead, they are presented with problems where the optimal solution is not known in advance, and are asked to discover that solution on their own. Indeed, if genetic algorithms could do nothing more than rediscover solutions already programmed into them, what would be the point of using them? It would be an exercise in redundancy. However, the widespread scientific (and commercial) interest in GAs shows that there is far more substance to them than the rather trivial example to which Dembski tries to reduce this entire field.

Having set up and then knocked down this straw man, Dembski moves on to his next line of argument: that the specified complexity exhibited by the outcomes of more representative evolutionary algorithms has, like the weasel program's, been smuggled in by the designers of the algorithm.
As Dembski puts it, "But invariably we find that when specified complexity seems to be generated for free, it has in fact been front-loaded, smuggled in, or hidden from view" (p.204). Dembski suggests that the most common hiding place of specified complexity is in the GA's fitness function: "What the evolutionary algorithm has done is take advantage of the specified complexity inherent in the fitness function and used it in searching for and then locating the target" (p.194). Dembski goes on to argue that, before an EA can search a given fitness landscape for a solution, some mechanism must first be employed to select that fitness landscape from what he calls a phase space of all possible fitness landscapes; and if that mechanism is likewise an evolutionary one, some other mechanism must first be employed to select its fitness function from an even larger phase space, and so on. Dembski concludes that the only way to stop this infinite regress is through intelligence, which he holds to have some irreducible, mysterious ability to select a fitness function from a given phase space without recourse to higher-order phase spaces: "There is only one known generator of specified complexity, and that is intelligence" (p.207).

Dembski is correct when he writes that the fitness function "guides an evolutionary algorithm into the target" (p.192). However, he is incorrect in his claim that selecting the right fitness function is a process that requires the generation of even more specified complexity than the EA itself produces. As Koza (1999, p. 39) writes, the fitness function tells an evolutionary algorithm what needs to be done, not how to do it. Unlike the unrepresentative weasel program example, the fitness function of an EA typically does not specify any particular form that the solution should take, and therefore it cannot be said to contribute specified complexity to the evolved solution in any meaningful sense.

An example will illustrate the point in greater detail. Dembski claims that in Chellapilla and Fogel's checkers experiment, their choice to hold the winning criterion constant from game to game "inserted an enormous amount of specified complexity" (p.223). It is certainly true that the final product of this process displayed a great deal of specified complexity (however one chooses to define that term). But is it true that the chosen fitness measure contained just as much specified complexity? Here is what Chellapilla and Fogel actually say:

"To appreciate the level of play that has been achieved, it may be useful to consider the following thought experiment. Suppose you are asked to play a game on an eight-by-eight board of squares with alternating colors. There are 12 pieces on each side arranged in a specific manner to begin play. You are told the rules of how the pieces move (i.e., diagonally, forced jumps, kings) and that the piece differential is available as a feature. You are not, however, told whether or not this differential is favorable or unfavorable (there is a version of checkers termed suicide checkers, where the object is to lose as fast as possible) or if it is even valuable information. Most importantly, you are not told the object of the game. You simply make moves and at some point an external observer declares the game over. They do not, however, provide feedback on whether or not you won, lost, or drew. The only data you receive comes after a minimum of five such games and is offered in the form of an overall point score.
Thus, you cannot know with certainty which games contributed to the overall result or to what degree. Your challenge is to induce the appropriate moves in each game based only on this coarse level of feedback." (Chellapilla and Fogel 2001, p.427)

It exceeds the bounds of the absurd for Dembski to claim that this fitness measure inserted an enormous amount of specified complexity. If a human being who had never heard of checkers were given the same information, and we returned several months later to discover that he had become an internationally ranked checkers expert, should we conclude that specified complexity had been generated?

Dembski states that to overturn his argument, one must show that "finding the information that guides an evolutionary algorithm to a target is substantially easier than finding the target directly through a blind search" (p.204). I contend that this is precisely the case. Intuitively, it should not be surprising that the fitness function contains less information than the evolved solution. This is precisely the reason why GAs have found such widespread use: it is easier (that is, it requires less information) to write a fitness function that measures how good a solution is than to design a good solution from scratch.

In more informal terms, consider Dembski's two examples, the crooked-wire genetic antenna and the evolved checkers-playing neural network named Anaconda. It requires a great deal of detailed information about the game of checkers to come up with a winning strategy (consider Chinook and its enormous library of endgames). However, it does not require equally detailed information to recognize such a strategy when one sees it: all we need observe is that it consistently defeats its opponents. Similarly, a person who knew nothing about how to design an antenna that radiates evenly over a hemispherical region in a given frequency range could still test such an antenna and verify that it works as intended. In both cases, determining what constitutes high fitness is far easier (requires less information) than figuring out how to achieve high fitness.

Granted, even though choosing a fitness function for a given problem requires less information than actually solving the problem defined by that fitness function, it does take some information to specify the fitness function in the first place, and it is a legitimate question to ask where this initial information comes from. Dembski may still ask about the origin of the human intelligence that enables us to decide to solve one problem rather than another, or about the origin of the natural laws of the cosmos that make it possible for life to exist and flourish and for evolution to occur. These are valid questions, and Dembski is entitled to wonder about them. However, by this point - seemingly unnoticed by Dembski himself - he has moved away from his initial argument. He is no longer claiming that evolution cannot happen; instead, he is essentially asking why we live in a universe where evolution can happen. What Dembski does not seem to realize is that the logical conclusion of his argument is theistic evolution. His position is fully compatible with a God who (as many Christians, including evolutionary biologist Kenneth Miller, believe) used evolution as his creative tool, and set up the universe in such a way as to make evolution not just likely, but certain. I will conclude by clearing up some additional, minor misconceptions in chapter 4 of No Free Lunch.
For starters, although Dembski, unlike Batten, is clearly aware of the field of multiobjective optimization, he erroneously states that "until some form of univalence is achieved, optimization cannot begin" (p.186). This essay's discussion of multiple-objective genetic algorithms shows the error of that claim. Perhaps other design techniques have this restriction, but one of the virtues of GAs is precisely that they can make trade-offs and optimize several mutually exclusive objectives simultaneously; the human overseers can then pick whichever solution best achieves their goals from the final group of Pareto-optimal solutions. No method of combining multiple criteria into one is necessary.

Dembski also states that GAs "seem less adept at constructing integrated systems that require multiple parts to achieve novel functions" (p.237). The many examples detailed in this essay (particularly John Koza's use of genetic programming to engineer complex analog circuits) show this claim to be false as well.

Finally, Dembski mentions that INFORMS, the professional organization of the operations research community, pays very little attention to GAs, and argues that this is reason to be skeptical of the technique's general scope and power (p.237). However, just because one particular scientific society is not making widespread use of GAs does not mean that such uses are not widespread elsewhere or in general, and this essay has endeavored to show that this is in fact the case. Evolutionary techniques have found a wide variety of uses in virtually any field of science one would care to name, as well as among many companies in the commercial sector. By contrast, given the dearth of scientific discoveries and research stimulated by intelligent design, Dembski is in a poor position to complain about lack of practical application. Intelligent design is a vacuous hypothesis, telling us nothing more than "some designer did something, somehow, at some time, to cause this result." This essay, by contrast, has hopefully demonstrated that evolution is a problem-solving strategy rich with practical applications.

Conclusion

Even creationists find it impossible to deny that the combination of mutation and natural selection can produce adaptation. Nevertheless, they still attempt to justify their rejection of evolution by dividing the evolutionary process into two categories - microevolution and macroevolution - and arguing that only the second is controversial, and that any evolutionary change we observe is only an example of the first.

Now, microevolution and macroevolution are terms that have meaning to biologists; they are defined, respectively, as evolution below the species level and evolution at or above the species level. But the crucial difference between the way creationists use these terms and the way scientists use them is that scientists recognize that these two are fundamentally the same process with the same mechanisms, merely operating at different scales. Creationists, however, are forced to postulate some type of unbridgeable gap separating the two, in order to deny that the processes of change and adaptation we see operating in the present can be extrapolated to produce all the diversity observed in the living world. However, genetic algorithms make this view untenable by demonstrating the fundamental seamlessness of the evolutionary process.
Take, for example, a problem that consists of programming a circuit to discriminate between a 1-kilohertz and a 10-kilohertz tone, and to respond respectively with steady outputs of 0 and 5 volts. Say we have a candidate solution that can accurately discriminate between the two tones, but whose outputs are not quite as steady as required: they produce small waveforms rather than the requisite unchanging voltage. Presumably, according to the creationist view, to change this circuit from its present state to the perfect solution would be microevolution, a small change within the ability of mutation and selection to produce. But surely, a creationist would argue, to arrive at this same final state from a completely random initial arrangement of components would be macroevolution, and beyond the reach of an evolutionary process. However, genetic algorithms were able to accomplish both, evolving the system from a random arrangement to the near-perfect solution and finally to the perfect, optimal solution. At no step of the way did an insoluble difficulty or an unbridgeable gap turn up. At no point whatsoever was human intervention required to assemble an irreducibly complex core of components (despite the fact that the finished product does contain such a thing) or to guide the evolving system over a difficult peak. The circuit evolved, without any intelligent guidance, from a completely random and non-functional state to a tightly complex, efficient and optimal state. How can this not be a compelling experimental demonstration of the power of evolution?

It has been said that human cultural evolution has superseded the biological kind - that we as a species have reached a point where we are able to consciously control our society, our environment and even our genes to a sufficient degree to make the evolutionary process irrelevant. It has been said that the cultural whims of our rapidly changing society, rather than the comparatively glacial pace of genetic mutation and natural selection, are what determine fitness today. In a sense, this may well be true.

But in another sense, nothing could be further from the truth. Evolution is a problem-solving process whose power we are only beginning to understand and exploit; despite this, it is already at work all around us, shaping our technology and improving our lives, and in the future these uses will only multiply. Without a detailed understanding of the evolutionary process, none of the countless advances we owe to genetic algorithms would have been possible. There is a lesson here for those who deny the power of evolution, as well as for those who deny that knowledge of it has any practical benefit. As incredible as it may seem, evolution works. As the poet Lord Byron put it: "'Tis strange, but true; for truth is always strange, stranger than fiction."

References and resources

Adaptive Learning: Fly the Brainy Skies. Wired, vol.10, no.3 (March 2002). Available online at wiredwiredarchive10.03everywhere.htmlpg2.
Altshuler, Edward and Derek Linden. Design of a wire antenna using a genetic algorithm. Journal of Electronic Defense, vol.20, no.7, p.50-52 (July 1997).
Andre, David and Astro Teller. Evolving team Darwin United. In RoboCup-98: Robot Soccer World Cup II, Minoru Asada and Hiroaki Kitano (eds). Lecture Notes in Computer Science, vol.1604, p.346-352. Springer-Verlag, 1999. See also: Willihnganz, Alexis. Software that writes software. Salon, August 10, 1998. Available online at salontechfeature19990810geneticprogramming.
Andreou, Andreas, Efstratios Georgopoulos and Spiridon Likothanassis. Exchange-rates forecasting: A hybrid algorithm based on genetically optimized adaptive neural networks. Computational Economics, vol.20, no.3, p.191-210 (December 2002).
Ashley, Steven. Engineous explores the design space. Mechanical Engineering, February 1992, p.49-52.
Assion, A., T. Baumert, M. Bergt, T. Brixner, B. Kiefer, V. Seyfried, M. Strehle and G. Gerber. Control of chemical reactions by feedback-optimized phase-shaped femtosecond laser pulses. Science, vol.282, p.919-922 (30 October 1998).
Au, Wai-Ho, Keith Chan and Xin Yao. A novel evolutionary data mining algorithm with applications to churn prediction. IEEE Transactions on Evolutionary Computation, vol.7, no.6, p.532-545 (December 2003).
Beasley, J. E., J. Sonander and P. Havelock. Scheduling aircraft landings at London Heathrow using a population heuristic. Journal of the Operational Research Society, vol.52, no.5, p.483-493 (May 2001).
Begley, Sharon and Gregory Beals. Software au naturel. Newsweek, May 8, 1995, p.70.
Benini, Ernesto and Andrea Toffolo. Optimal design of horizontal-axis wind turbines using blade-element theory and evolutionary computation. Journal of Solar Energy Engineering, vol.124, no.4, p.357-363 (November 2002).
Burke, E. K. and J. P. Newall. A multistage evolutionary algorithm for the timetable problem. IEEE Transactions on Evolutionary Computation, vol.3, no.1, p.63-74 (April 1999).
Charbonneau, Paul. Genetic algorithms in astronomy and astrophysics. The Astrophysical Journal Supplement Series, vol.101, p.309-334 (December 1995).
Chellapilla, Kumar and David Fogel. Evolving an expert checkers playing program without using human expertise. IEEE Transactions on Evolutionary Computation, vol.5, no.4, p.422-428 (August 2001). Available online at natural-selectionNSIPublicationsOnline.htm.
Chellapilla, Kumar and David Fogel. Anaconda defeats Hoyle 6-0: a case study competing an evolved checkers program against commercially available software. In Proceedings of the 2000 Congress on Evolutionary Computation, p.857-863. IEEE Press, 2000. Available online at natural-selectionNSIPublicationsOnline.htm.
Chellapilla, Kumar and David Fogel. Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player. Neurocomputing, vol.42, no.1-4, p.69-86 (January 2002).
Chryssolouris, George and Velusamy Subramaniam. Dynamic scheduling of manufacturing job shops using genetic algorithms. Journal of Intelligent Manufacturing, vol.12, no.3, p.281-293 (June 2001).
Coello, Carlos. An updated survey of GA-based multiobjective optimization techniques. ACM Computing Surveys, vol.32, no.2, p.109-143 (June 2000).
Davidson, Clive. Creatures from primordial silicon. New Scientist, vol.156, no.2108, p.30-35 (November 15, 1997). Available online at newscientisthottopicsaiprimordial.jsp.
Dawkins, Richard. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design. W. W. Norton, 1996.
Dembski, William. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. Rowman & Littlefield, 2002.
Fleming, Peter and R. C. Purshouse. Evolutionary algorithms in control systems engineering: a survey. Control Engineering Practice, vol.10, p.1223-1241 (2002).
Fonseca, Carlos and Peter Fleming. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, vol.3, no.1, p.1-16 (1995).
Forrest, Stephanie. Genetic algorithms: principles of natural selection applied to computation. Science, vol.261, p.872-878 (1993).
Genetic algorithms: principles of natural selection applied to computation. Science. vol.261, p.872-878 (1993). Gibbs, W. Wayt. Programming with primordial ooze. Scientific American. October 1996, p.48-50. Gillet, Valerie. Reactant - and product-based approaches to the design of combinatorial libraries. Journal of Computer-Aided Molecular Design. vol.16, p.371-380 (2002). Giro, R. M. Cyrillo and D. S. Galvatildeo. Designing conducting polymers using genetic algorithms. Chemical Physics Letters. vol.366, no.1-2, p.170-175 (November 25, 2002). Glen, R. C. and A. W.R. Payne. A genetic algorithm for the automated generation of molecules within constraints. Journal of Computer-Aided Molecular Design. vol.9, p.181-202 (1995). Goldberg, David. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989. Graham-Rowe, Duncan. Radio emerges from the electronic soup. New Scientist. vol.175, no.2358, p.19 (August 31, 2002). Available online at newscientistnewsnews. jspidns99992732 . See also: Bird, Jon and Paul Layzell. The evolved radio and its implications for modelling the evolution of novel sensors. In Proceedings of the 2002 Congress on Evolutionary Computation. p.1836-1841. Graham-Rowe, Duncan. Electronic circuit evolves from liquid crystals. New Scientist. vol.181, no.2440, p.21 (March 27, 2004). Haas, O. C.L. K. J. Burnham and J. A. Mills. On improving physical selectivity in the treatment of cancer: A systems modelling and optimisation approach. Control Engineering Practice. vol.5, no.12, p.1739-1745 (December 1997). Hanne, Thomas. Global multiobjective optimization using evolutionary algorithms. Journal of Heuristics. vol.6, no.3, p.347-360 (August 2000). Haupt, Randy and Sue Ellen Haupt. Practical Genetic Algorithms. John Wiley amp Sons, 1998. He, L. and N. Mort. Hybrid genetic algorithms for telecommunications network back-up routeing. BT Technology Journal. vol.18, no.4, p. 42-50 (Oct 2000). Holland, John. Genetic algorithms. Scientific American. July 1992, p. 66-72. Hughes, Evan and Maurice Leyland. Using multiple genetic algorithms to generate radar point-scatterer models. IEEE Transactions on Evolutionary Computation. vol.4, no.2, p.147-163 (July 2000). Jensen, Mikkel. Generating robust and flexible job shop schedules using genetic algorithms. IEEE Transactions on Evolutionary Computation. vol.7, no.3, p.275-288 (June 2003). Kewley, Robert and Mark Embrechts. Computational military tactical planning system. IEEE Transactions on Systems, Man and Cybernetics, Part C - Applications and Reviews. vol.32, no.2, p.161-171 (May 2002). Kirkpatrick, S. C. D. Gelatt and M. P. Vecchi. Optimization by simulated annealing. Science. vol.220, p.671-678 (1983). Koza, John, Forest Bennett, David Andre and Martin Keane. Genetic Programming III: Darwinian Invention and Problem Solving. Morgan Kaufmann Publishers, 1999. Koza, John, Martin Keane, Matthew Streeter, William Mydlowec, Jessen Yu and Guido Lanza. Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Kluwer Academic Publishers, 2003. See also: Koza, John, Martin Keane and Matthew Streeter. Evolving inventions. Scientific American. February 2003, p. 52-59. Keane, A. J. and S. M. Brown. The design of a satellite boom with enhanced vibration performance using genetic algorithm techniques. In Adaptive Computing in Engineering Design and Control 96 - Proceedings of the Second International Conference. I. C. Parmee (ed), p.107-113. University of Plymouth, 1996. See also: Petit, Charles. 
Touched by nature: Putting evolution to work on the assembly line. U. S. News and World Report. vol.125, no.4, p.43-45 (July 27, 1998). Available online at genetic-programmingpublishedusnwr072798.html . Lee, Yonggon and Stanislaw H. Zak. Designing a genetic neural fuzzy antilock-brake-system controller. IEEE Transactions on Evolutionary Computation. vol.6, no.2, p.198-211 (April 2002). Lemley, Brad. Machines that think. Discover. January 2001, p.75-79. Mahfoud, Sam and Ganesh Mani. Financial forecasting using genetic algorithms. Applied Artificial Intelligence. vol.10, no.6, p.543-565 (1996). Mitchell, Melanie. An Introduction to Genetic Algorithms. MIT Press, 1996. Naik, Gautam. Back to Darwin: In sunlight and cells, science seeks answers to high-tech puzzles. The Wall Street Journal. January 16, 1996, p. A1. Obayashi, Shigeru, Daisuke Sasaki, Yukihiro Takeguchi, and Naoki Hirose. Multiobjective evolutionary computation for supersonic wing-shape optimization. IEEE Transactions on Evolutionary Computation. vol.4, no.2, p.182-187 (July 2000). Petzinger, Thomas. At Deere they know a mad scientist may be a firms biggest asset. The Wall Street Journal. July 14, 1995, p. B1. Porto, Vincent, David Fogel and Lawrence Fogel. Alternative neural network training methods. IEEE Expert. vol.10, no.3, p.16-22 (June 1995). Rao, Srikumar. Evolution at warp speed. Forbes. vol.161, no.1, p.82-83 (January 12, 1998). Rizki, Mateen, Michael Zmuda and Louis Tamburino. Evolving pattern recognition systems. IEEE Transactions on Evolutionary Computation. vol.6, no.6, p.594-609 (December 2002). Robin, Franck, Andrea Orzati, Esteban Moreno, Otte Homan, and Werner Bachtold. Simulation and evolutionary optimization of electron-beam lithography with genetic and simplex-downhill algorithms. IEEE Transactions on Evolutionary Computation. vol.7, no.1, p.69-82 (February 2003). Sagan, Carl. Brocas Brain: Reflections on the Romance of Science. Ballantine, 1979. Sambridge, Malcolm and Kerry Gallagher. Earthquake hypocenter location using genetic algorithms. Bulletin of the Seismological Society of America. vol.83, no.5, p.1467-1491 (October 1993). Sasaki, Daisuke, Masashi Morikawa, Shigeru Obayashi and Kazuhiro Nakahashi. Aerodynamic shape optimization of supersonic wings by adaptive range multiobjective genetic algorithms. In Evolutionary Multi-Criterion Optimization: First International Conference, EMO 2001, Zurich, Switzerland, March 2001: Proceedings. K. Deb, L. Theile, C. Coello, D. Corne and E. Zitler (eds). Lecture Notes in Computer Science, vol.1993, p.639-652. Springer-Verlag, 2001. Sato, S. K. Otori, A. Takizawa, H. Sakai, Y. Ando and H. Kawamura. Applying genetic algorithms to the optimum design of a concert hall. Journal of Sound and Vibration. vol.258, no.3, p. 517-526 (2002). Schechter, Bruce. Putting a Darwinian spin on the diesel engine. The New York Times. September 19, 2000, p. F3. Srinivas, N. and Kalyanmoy Deb. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation. vol.2, no.3, p.221-248 (Fall 1994). Soule, Terrence and Amy Ball. A genetic algorithm with multiple reading frames. In GECCO-2001: Proceedings of the Genetic and Evolutionary Computation Conference. Lee Spector and Eric Goodman (eds). Morgan Kaufmann, 2001. Available online at cs. uidaho. edu Tang, K. S. K. F. Man, S. Kwong and Q. He. Genetic algorithms and their applications. IEEE Signal Processing Magazine. vol.13, no.6, p.22-37 (November 1996). Weismann, Dirk, Ulrich Hammel, and Thomas Baumlck. 
Genetic Testing

Aetna considers genetic testing medically necessary to establish a molecular diagnosis of an inheritable disease when all of the following are met: The member displays clinical features, or is at direct risk of inheriting the mutation in question (pre-symptomatic); and The result of the test will directly impact the treatment being delivered to the member; and After history, physical examination, pedigree analysis, genetic counseling, and completion of conventional diagnostic studies, a definitive diagnosis remains uncertain, and one of the following diagnoses is suspected (this list is not all-inclusive): Achondroplasia (FGFR3); Albinism; Alpha-1 antitrypsin deficiency (SERPINA1); Alpha thalassemia/Hb Bart hydrops fetalis syndrome/HbH disease (HBA1/HBA2, alpha globin 1 and alpha globin 2); Angelman syndrome (GABRA, SNRPN); Bardet-Biedl syndrome; Beta thalassemia (beta globin); Bloom syndrome (BLM); CADASIL (see below); Canavan disease (ASPA (aspartoacylase A)); Charcot-Marie-Tooth disease (PMP-22); Classical lissencephaly; Congenital adrenal hyperplasia/21-hydroxylase deficiency (CYP21A2); Congenital amegakaryocytic thrombocytopenia; Congenital central hypoventilation syndrome (PHOX2B); Congenital muscular dystrophy type 1C (MDC1C) (FKRP (Fukutin-related protein)); Crouzon syndrome (FGFR2, FGFR3); Cystic fibrosis (CFTR) (see below); Dentatorubral-pallidoluysian atrophy; Duchenne/Becker muscular dystrophy (dystrophin); Dysferlin myopathy; Ehlers-Danlos syndrome; Emery-Dreifuss muscular dystrophy (EDMD1, 2, and 3); Fabry disease; Factor V Leiden mutation (F5 (Factor V)); Factor XIII deficiency, congenital (F13 (Factor XIII beta globulin)); Familial adenomatous polyposis coli (APC) (see below); Familial dysautonomia (IKBKAP); Familial hypocalciuric hypercalcemia (see below); Familial Mediterranean fever (MEFV); Fanconi anemia (FANCC, FANCD); Fragile X syndrome, FRAXA (FMR1) (see below); Friedreich's ataxia (FRDA (frataxin)); Galactosemia (GALT); Gaucher disease (GBA (acid beta-glucosidase)); Gitelman's syndrome; Hemoglobin E thalassemia; Hemoglobin S and/or C; Hemophilia A/VWF (F8 (Factor VIII)); Hemophilia B (F9 (Factor IX)); Hereditary amyloidosis (TTR variants); Hereditary deafness (GJB2 (Connexin-26, Connexin-32)); Hereditary hemorrhagic telangiectasia (HHT); Hereditary hemochromatosis (HFE) (see below); Hereditary leiomyomatosis and renal cell cancer (HLRCC) syndrome (fumarate hydratase (FH) gene); Hereditary neuropathy with liability to pressure palsies (HNPP); Hereditary non-polyposis colorectal cancer (HNPCC) (MLH1, MSH2, MSH6,
MSI) (see below); Hereditary pancreatitis (PRSS1) (see below); Hereditary paraganglioma (SDHD, SDHB); Hereditary polyposis coli (APC); Hereditary spastic paraplegia 3 (SPG3A) and 4 (SPG4, SPAST); Huntington's disease (HTT, HD (huntingtin)); Hypochondroplasia (FGFR3); Hypertrophic cardiomyopathy (see below); Jackson-Weiss syndrome (FGFR2); Joubert syndrome; Kallmann syndrome (FGFR1); Kennedy disease (SBMA); Leber hereditary optic neuropathy (LHON); Leigh syndrome and NARP (neurogenic muscle weakness, ataxia, and retinitis pigmentosa); Long QT syndrome (see below); Limb-girdle muscular dystrophy (LGMD1, LGMD2) (FKRP (Fukutin-related protein)); Malignant hyperthermia (RYR1); Maple syrup urine disease (branched-chain keto acid dehydrogenase E1); Marfan's syndrome (TGFBR1, TGFBR2); McArdle's disease; Medium-chain acyl-CoA dehydrogenase deficiency (ACADM); Medullary thyroid carcinoma; MELAS (mitochondrial encephalomyopathy with lactic acidosis and stroke-like episodes) (MTTL1, tRNAleu); Meckel-Gruber syndrome; Mucolipidosis type IV (MCOLN1, mucolipin 1); Mucopolysaccharidoses type 1 (MPS-1); Muenke syndrome (FGFR3); Multiple endocrine neoplasia type 1; Muscle-Eye-Brain disease (POMGNT1); MYH-associated polyposis (MYH) (see below); Myoclonic epilepsy (MERRF) (MTTK (tRNAlys)); Myotonic dystrophy (DMPK, ZNF-9); Niemann-Pick disease, type A (SMPD1, sphingomyelin phosphodiesterase); Nephrotic syndrome, congenital (NPHS1, NPHS2); Neurofibromatosis type 1 (NF1, neurofibromin); Neurofibromatosis type 2 (Merlin); Neutropenia, congenital cyclic; Nephronophthisis; Phenylketonuria (PAH); Pfeiffer syndrome (FGFR1); Prader-Willi-Angelman syndrome (SNRPN, GABRA5, NIPA1, UBE3A, ANCR, GABRA); Primary dystonia (TOR1A (DYT1)); Prothrombin (F2 (Factor II, 20210G>A mutation)); Pyruvate kinase deficiency (PKD); Retinoblastoma (RB1); Rett syndrome (FOXG1, MECP2); Saethre-Chotzen syndrome (TWIST, FGFR2); SHOX-related short stature (see below); Smith-Lemli-Opitz syndrome; Spinal muscular atrophy (SMN1); Spinocerebellar ataxia (SCA types 1, 2, 3 (MJD), 6 (CACNA1A), 7, 8, 10, 17 and DRPLA); Tay-Sachs disease (HEXA (hexosaminidase A)); Thanatophoric dysplasia (FGFR3); Von Gierke disease (G6PC, glycogen storage disease type 1a); Von Hippel-Lindau syndrome (VHL); Walker-Warburg syndrome (POMGNT1); 22q11 deletion syndromes (DGCR (CATCH-22)).

Medically necessary if results of the adrenocortical profile following cosyntropin stimulation test are equivocal or for purposes of genetic counseling. Electrophoresis is the appropriate initial laboratory test for individuals judged to be at risk for a hemoglobin disorder. In the absence of specific information regarding advances in the knowledge of mutation characteristics for a particular disorder, the current literature indicates that genetic tests for inherited disease need only be conducted once per lifetime of the member.

Note: Genetic testing of Aetna members is excluded from coverage under Aetna's benefit plans if the testing is performed primarily for the medical management of other family members who are not covered under an Aetna benefit plan. In these circumstances, the insurance carrier for the family members who are not covered by Aetna should be contacted regarding coverage of genetic testing. Occasionally, genetic testing of tissue samples from other family members who are not covered by Aetna may be required to provide the medical information necessary for the proper medical care of an Aetna member.
Aetna covers genetic testing for heritable disorders in non-Aetna members when all of the following conditions are met: The information is needed to adequately assess risk in the Aetna member; and The information will be used in the immediate care plan of the Aetna member; and The non-Aetna member's benefit plan, if any, will not cover the test (a copy of the denial letter from the non-Aetna member's benefit plan must be provided). Aetna may also request a copy of the certificate of coverage from the non-member's health insurance plan if: (i) the denial letter from the non-member's insurance carrier fails to specify the basis for non-coverage; (ii) the denial is based on a specific plan exclusion; or (iii) the genetic test is denied by the non-member's insurance carrier as not medically necessary and the medical information provided to Aetna does not make clear why testing would not be of significant medical benefit to the non-member.

Medical Necessity Criteria for Specific Genetic Tests:

Adenomatous polyposis coli (APC): Aetna considers adenomatous polyposis coli (APC) genetic testing medically necessary for any of the following indications: Members with greater than 10 colonic polyps; or Members with a desmoid tumor, hepatoblastoma, or cribriform-morular variant of papillary thyroid cancer; or Members with 1st-degree relatives (i.e., siblings, parents, and offspring) diagnosed with familial adenomatous polyposis (FAP) or with a documented APC mutation. The specific APC mutation should be identified in the affected 1st-degree relative with FAP prior to testing the member, if feasible. Full-sequence APC genetic testing is considered medically necessary only when it is not possible to determine the family mutation first. Aetna considers APC genetic testing experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established.

CADASIL: Aetna considers DNA testing for CADASIL medically necessary for either of the following indications: Pre-symptomatic individuals where there is a family history consistent with an autosomal dominant pattern of inheritance and there is a known mutation in an affected member of the family; or Symptomatic individuals who have a family history consistent with an autosomal dominant pattern of inheritance of this condition (clinical signs and symptoms of CADASIL include stroke, cognitive defects and/or dementia, migraine, and psychiatric disturbances). Aetna considers CADASIL genetic testing experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established.

Catecholaminergic polymorphic ventricular tachycardia (CPVT): Aetna considers genetic testing for CPVT medically necessary for the following indications: Persons with a 1st-degree relative (i.e., parent, full-sibling, child) with a defined CPVT mutation (Note: test for the known familial mutation); or Persons who display exercise-, catecholamine-, or emotion-induced PVT or ventricular fibrillation, occurring in a structurally normal heart.
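The testing-strategy pattern that recurs in the criteria above (for example, for APC and CPVT) can be read as a simple decision rule: test for the known familial mutation when one has been documented in an affected relative, and fall back to full sequence analysis only when the family mutation cannot be determined first. The sketch below is a hypothetical illustration of that reading; the class, field and function names are invented for clarity and do not represent Aetna's actual adjudication logic.

# Hypothetical sketch of the recurring testing-strategy pattern noted above.
# Names are illustrative, not part of any real coverage or laboratory system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FamilyHistory:
    affected_first_degree_relative: bool
    known_familial_mutation: Optional[str] = None   # e.g., a documented APC variant

def select_test(history: FamilyHistory) -> str:
    if history.known_familial_mutation:
        return f"targeted test for familial mutation: {history.known_familial_mutation}"
    if history.affected_first_degree_relative:
        # Identify the mutation in the affected relative first, if feasible.
        return "attempt mutation identification in the affected relative before full sequencing"
    return "full sequence analysis (only when the family mutation cannot be determined)"

print(select_test(FamilyHistory(True, "documented APC variant (illustrative)")))
print(select_test(FamilyHistory(True)))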
Cystic fibrosis: Aetna considers genetic carrier testing for cystic fibrosis medically necessary for members in any of the following groups: Couples seeking prenatal care; or Couples who are planning a pregnancy; or Persons with a family history of cystic fibrosis; or Persons with a 1st-degree relative identified as a cystic fibrosis carrier; or Reproductive partners of persons with cystic fibrosis; or Positive newborn screen for CF, or signs and symptoms of CF are present, and the sweat chloride test is positive, intermediate, inconclusive or cannot be performed (e.g., the infant is too young to produce adequate volumes of sweat). Aetna considers genetic carrier testing for cystic fibrosis experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established. Aetna considers a core panel of 25 mutations recommended by the American College of Medical Genetics (ACMG) medically necessary for cystic fibrosis genetic testing; the standard CF transmembrane conductance regulator (CFTR) mutation panel is available at acmg.net. Aetna considers screening for cystic fibrosis mutations that extend beyond the standard mutation panel recommended by the ACMG experimental and investigational.

Factor V Leiden: Aetna considers Factor V Leiden genetic testing medically necessary for members with an abnormal activated protein C (APC) resistance assay result and any of the following indications: Asymptomatic female who is planning pregnancy or is currently pregnant and not taking anticoagulation therapy, and either of the following: a first-degree blood relative (i.e., parent, full-sibling, child) with a history of high-risk thrombophilia (e.g., antithrombin deficiency, double heterozygosity or homozygosity for FVL or prothrombin G20210A) or a first-degree blood relative (i.e., parent, full-sibling, child) with venous thromboembolism (VTE) before age 50 years; or First unprovoked (i.e., from an unknown cause) VTE at any age (especially age less than 50 years); or Individual with a first VTE and a first-degree blood family member (i.e., parent, full-sibling, child) with a VTE occurring before age 50 years; or Individual with a history of recurrent VTE; or Venous thrombosis at unusual sites (e.g., cerebral, mesenteric, portal and hepatic veins); or VTE associated with the use of oral contraceptives or hormone replacement therapy (HRT); or VTE during pregnancy or the puerperium. Aetna considers Factor V Leiden genetic testing experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established. Aetna considers Factor V HR2 allele DNA mutation analysis experimental and investigational because its effectiveness has not been established.
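As a reading aid, the Factor V Leiden criteria above amount to a conjunction: an abnormal APC resistance assay result plus at least one qualifying indication. The following hypothetical sketch encodes that structure; the indication labels are shorthand for the items listed above and the function is not a coverage-determination tool.

# Illustrative encoding of the Factor V Leiden criteria listed above.
# Labels and function names are hypothetical; this is a reading aid only.
QUALIFYING_INDICATIONS = {
    "asymptomatic_pregnancy_planning_with_high_risk_family_history",
    "first_degree_relative_vte_before_50",
    "first_unprovoked_vte",
    "first_vte_with_first_degree_relative_vte_before_50",
    "recurrent_vte",
    "vte_at_unusual_site",
    "vte_on_oral_contraceptives_or_hrt",
    "vte_in_pregnancy_or_puerperium",
}

def fvl_testing_supported(abnormal_apc_resistance: bool, indications: set) -> bool:
    # Both parts of the policy text must hold: abnormal assay AND a qualifying indication.
    return abnormal_apc_resistance and bool(indications & QUALIFYING_INDICATIONS)

print(fvl_testing_supported(True, {"recurrent_vte"}))    # True
print(fvl_testing_supported(False, {"recurrent_vte"}))   # False: assay result not abnormal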
Prothrombin G20210A thrombophilia (F2 gene): Aetna considers F2 gene testing for prothrombin G20210A thrombophilia medically necessary when the following criteria are met: Asymptomatic female who is planning pregnancy or is currently pregnant and not taking anticoagulation therapy, and either of the following: a first-degree blood relative (i.e., parent, full-sibling, child) with a history of high-risk thrombophilia (e.g., antithrombin deficiency, double heterozygosity or homozygosity for FVL or prothrombin G20210A) or a first-degree blood relative (i.e., parent, full-sibling, child) with VTE before age 50 years; or First unprovoked (i.e., from an unknown cause) VTE at any age (especially age less than 50 years); or Individual with a first VTE and a first-degree blood family member (i.e., parent, full-sibling, child) with a VTE occurring before age 50 years; or Individual with a history of recurrent VTE; or Venous thrombosis at unusual sites (e.g., cerebral, mesenteric, portal and hepatic veins); or VTE associated with the use of oral contraceptives or hormone replacement therapy (HRT); or VTE during pregnancy or the puerperium. Aetna considers F2 gene testing experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established.

Familial nephrotic syndrome (NPHS1, NPHS2): Aetna considers genetic testing for an NPHS1 mutation medically necessary for children with congenital nephrotic syndrome (nephrotic syndrome appearing within the first month of life) who are of Finnish descent or who have a family history of congenital nephrotic syndrome. Genetic testing for NPHS1 mutations is considered experimental and investigational for screening other persons with nephrotic syndrome and for all other indications because its effectiveness for other indications has not been established. Aetna considers genetic testing for an NPHS2 mutation medically necessary for children with steroid-resistant nephrotic syndrome (SRNS) and for children who have a family history of SRNS. Genetic testing for NPHS2 is considered experimental and investigational for persons with steroid-responsive nephrotic syndrome and for all other indications because its effectiveness for indications other than the ones listed above has not been established. Aetna considers genetic testing for familial nephrotic syndrome experimental and investigational for all other indications.

Fragile X syndrome (FMR1): Aetna considers genetic testing of the FMR1 gene medically necessary for members in any of the following risk categories where the results of the test will affect the member's clinical management or reproductive decisions: Individuals with mental retardation, developmental delay, autism or primary ovarian insufficiency (POI) (also known as premature ovarian failure); or Individuals older than 50 years of age with progressive cerebellar ataxia and intention tremor; or Individuals planning a pregnancy who have either of the following: a family history of fragile X syndrome, or a family history of unexplained mental retardation, developmental delay, autism or primary ovarian insufficiency (POI); or Fetuses of known carrier mothers. Prenatal testing of a fetus by amniocentesis or chorionic villus sampling is indicated following a positive fragile X carrier test in the mother. POI is defined as a female younger than 40 years of age with FSH levels in the postmenopausal range and at least three months of amenorrhea, oligomenorrhea or dysfunctional uterine bleeding.
Aetna considers fragile X DNA testing medically necessary for members with a negative cytogenetic test for fragile X if they have any physical or behavioral characteristics of fragile X syndrome and have a family history of fragile X syndrome or undiagnosed mental retardation. Aetna considers fragile X DNA testing medically necessary for members with a phenotype that is not typical for fragile X syndrome who have a cytogenetic test that is positive for fragile X. Aetna considers population-based fragile X syndrome screening of individuals who are not in any of the above-listed risk categories experimental and investigational because its effectiveness for indications other than the ones listed above has not been established.

Hemoglobinopathies and thalassemias: Aetna considers genetic testing for hemoglobinopathies and thalassemias (including, but not limited to, sickle cell anemia (HBB gene), alpha thalassemia (HBA1/HBA2 genes) and beta thalassemia (HBB gene)) medically necessary for couples planning pregnancy or seeking prenatal care when the following criteria are met: Individual to be tested has a family history of a hemoglobinopathy; or Individual to be tested has an affected or carrier family member with a known mutation; or Individual to be tested is suspected to have a hemoglobinopathy based on results of a complete blood count (CBC) and hemoglobin analysis (by electrophoresis, high-performance liquid chromatography (HPLC) or isoelectric focusing).

Hereditary hemochromatosis (HFE): Aetna considers genetic testing for HFE gene mutations medically necessary for persons who meet any of the following criteria: Member has symptoms consistent with iron overload and has 2 consecutive transferrin saturations of 45% or more or 2 consecutive elevated ferritins (i.e., >200 ng/mL in men, >150 ng/mL in women); or Member has a first-degree blood relative (i.e., parent, full-sibling, child) diagnosed with hereditary hemochromatosis; or Member has a first-degree blood relative with known HFE sequence variants consistent with HH. Genetic testing for hereditary hemochromatosis is considered experimental and investigational for general population screening and for all other indications because its effectiveness for indications other than the ones listed above has not been established.

Hereditary non-polyposis colorectal cancer (HNPCC)/Lynch syndrome (LS): Aetna considers genetic testing for HNPCC (MLH1, MSH2, MSH6, PMS2, EPCAM sequence analysis) medically necessary for members who meet any one of the following criteria: Member meets Amsterdam II criteria or revised Bethesda guidelines (see appendix); or Member is diagnosed with endometrial cancer before age 50 years; or Member has a 1st- or 2nd-degree relative with a disease confirmed to be caused by an HNPCC mutation (genes MLH1, MSH2, MSH6, PMS2, EPCAM) upon testing of the 1st- or 2nd-degree relative; or Member has a 5% or greater risk of LS on a validated mutation prediction model (e.g., MMRpro, PREMM1,2,6, MMRpredict). The PREMM1,2,6 model can be used online at premm.dfci.harvard.edu and the HNPCC predict model is available for online use at hnpccpredict.hgu.mrc.ac.uk. MMRpro is available for free download at httpwww4.utsouthwestern.edubreasthealthcagene. Aetna considers microsatellite instability (MSI) testing or immunohistochemical (IHC) analysis of tumors medically necessary as an initial test in persons with colorectal or endometrial cancer in order to identify those persons who should proceed with HNPCC mutation analysis.
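The last paragraph above describes a two-step approach for colorectal or endometrial cancer: tumor-based MSI testing or IHC analysis first, with germline HNPCC/Lynch mutation analysis reserved for those with an abnormal screen or another qualifying criterion (such as a validated model risk of 5% or greater). A hypothetical sketch of that flow follows; it mirrors the policy text but is not an official algorithm.

# Sketch of the two-step Lynch syndrome approach described above.
# The function and its parameters are hypothetical reading aids.
def proceed_to_germline_testing(msi_high: bool,
                                ihc_mmr_protein_loss: bool,
                                model_risk_percent: float = 0.0,
                                meets_amsterdam_or_bethesda: bool = False) -> bool:
    # Abnormal tumor screen (MSI-high or loss of a mismatch-repair protein on IHC),
    # a model-predicted risk of 5% or more, or clinical criteria justify germline analysis.
    abnormal_tumor_screen = msi_high or ihc_mmr_protein_loss
    return abnormal_tumor_screen or model_risk_percent >= 5.0 or meets_amsterdam_or_bethesda

print(proceed_to_germline_testing(msi_high=True, ihc_mmr_protein_loss=False))   # True
print(proceed_to_germline_testing(False, False, model_risk_percent=7.2))        # True
print(proceed_to_germline_testing(False, False))                                # False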
Hereditary pancreatitis (PRSS1): Aetna considers genetic testing for hereditary pancreatitis (PRSS1 mutation) medically necessary in symptomatic persons with any of the following indications: A family history of pancreatitis in a 1st-degree (parent, sibling, child) or 2nd-degree (aunt, uncle, grandparent) relative; or An unexplained episode of documented pancreatitis occurring in a child that has required hospitalization, and where there is significant concern that hereditary pancreatitis should be excluded; or Recurrent (2 or more separate, documented episodes with hyperamylasemia) attacks of acute pancreatitis for which there is no explanation (anatomical anomalies, ampullary or main pancreatic strictures, trauma, viral infection, gallstones, alcohol, drugs, hyperlipidemia, etc.); or Unexplained (idiopathic) chronic pancreatitis. This policy is based upon guidelines from the Consensus Committees of the European Registry of Hereditary Pancreatic Diseases, the Midwest Multi-Center Pancreatic Study Group and the International Association of Pancreatology (Ellis et al, 2001). Aetna considers genetic testing for hereditary pancreatitis experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established.

Long QT syndrome: Aetna considers genetic testing for long QT syndrome medically necessary for either of the following: Persons with a prolonged QT interval on resting electrocardiogram (a corrected QT interval (QTc) of 470 msec or more in males and 480 msec or more in females) without an identifiable external cause for QTc prolongation (such as heart failure, bradycardia, electrolyte imbalances, certain medications and other medical conditions); or Persons with 1st-degree blood relatives (full-siblings, parents, offspring) with a defined LQT mutation (Note: test for the known familial mutation). Aetna considers genetic testing for long QT syndrome experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established.

Malignant hyperthermia susceptibility (MHS): Aetna considers genetic testing for malignant hyperthermia susceptibility (MHS) medically necessary for either of the following indications: Screening clinically confirmed MHS patients for variants in the RYR1 gene that are considered causative for MH by the European Malignant Hyperthermia Group (EMHG), to facilitate predictive testing in at-risk relatives; or Screening at-risk relatives of patients with clinically confirmed MHS for a known familial variant in the RYR1 gene that is considered causative for MH by the EMHG. Aetna considers genetic testing for malignant hyperthermia susceptibility (MHS) experimental and investigational for all other indications. Aetna considers genetic testing for central core disease (CCD) experimental and investigational because there is inadequate evidence in the peer-reviewed published literature regarding its effectiveness.

MUTYH-associated polyposis: Aetna considers testing for MUTYH mutations medically necessary for the following indications: Members with greater than 10 colonic polyps; or Members meeting criteria for serrated polyposis syndrome (SPS) (see below) with at least some adenomas; or Members with 1st-degree relatives (i.e., siblings, parents, and offspring) with a documented deleterious MUTYH mutation.
A clinical diagnosis of SPS is considered in an individual who meets at least one of the following empiric criteria: At least 5 serrated polyps proximal to the sigmoid colon, with 2 or more of these being greater than 10 mm; or Any number of serrated polyps proximal to the sigmoid colon in an individual who has a first-degree relative with serrated polyposis; or Greater than 20 serrated polyps of any size, but distributed throughout the colon. Aetna considers MUTYH mutation testing experimental and investigational for any other indications because its effectiveness for indications other than the ones listed above has not been established.

Primary dystonia (DYT1): Aetna considers genetic testing for DYT1 medically necessary for the following indications: Parents of children with an established DYT1 mutation, for purposes of family planning; or Persons with onset of primary dystonia other than focal cranial-cervical dystonia after age 30 years who have an affected relative with early onset (before 30 years); or Persons with primary dystonia with onset before age 30 years. Aetna considers DYT1 testing experimental and investigational for all other indications, including the following, because its effectiveness for indications other than the ones listed above has not been established: Asymptomatic individuals (other than parents of affected children), including those with affected family members (genetic testing for dystonia (DYT1) is not sufficient to make a diagnosis of dystonia unless clinical features show dystonia); or Persons with onset of symptoms after age 30 years who have focal cranial-cervical dystonia; or Persons with onset of symptoms after age 30 years who have no affected relative with early-onset dystonia. This policy is adapted from guidelines from the European Federation of Neurological Societies.

Spinal muscular atrophy: Aetna considers genetic testing for SMN1 medically necessary for the following indications: Individual to be tested exhibits symptoms of SMA (e.g., symmetrical proximal muscle weakness, absent or markedly decreased deep tendon reflexes); or Carrier screening when the individual to be tested is asymptomatic and any of the following criteria are met: Individual has a family history of SMA or SMA-like disease; or Individual has an affected or carrier blood relative in whom a disease-causing SMA mutation has been identified (testing strategy: test for the familial mutation); or Individual is the reproductive partner of an individual affected with or a carrier of SMA or SMA-like disease; or The prenatal diagnosis or preimplantation genetic diagnosis of SMA in the pregnancy of two known carriers. Note: SMA includes arthrogryposis multiplex congenita-SMA (AMC-SMA), congenital axonal neuropathy (CAN), SMA 0, SMA I (Werdnig-Hoffmann disease), SMA II, SMA III (Kugelberg-Welander disease) and SMA IV. Aetna considers SMN2 gene testing for SMA experimental and investigational. Aetna considers genetic testing for spinal muscular atrophy (SMA) experimental and investigational for the identification of SMN1 deletion carriers in the general population and for all other indications because there is inadequate evidence in the published peer-reviewed clinical literature regarding its effectiveness.
SHOX-related short stature: Aetna considers genetic testing for SHOX-related short stature medically necessary for children and adolescents with any of the following features: Above-average body mass index (BMI); or Cubitus valgus (increased carrying angle); or Dislocation of the ulna at the elbow; or Increased sitting height/height ratio; or Madelung deformity of the forearm; or Muscular hypertrophy; or Reduced arm span/height ratio; or Short or bowed forearm. Aetna considers genetic testing for SHOX-related short stature experimental and investigational for all other indications because its effectiveness for indications other than the ones listed above has not been established.

Hypertrophic cardiomyopathy (HCM): Aetna considers genetic testing for HCM medically necessary for individuals who meet the following criteria: Individual to be tested has been evaluated (e.g., electrocardiogram (ECG), echocardiography) and exhibits no clinical evidence of HCM; and Individual has a 1st-degree relative (i.e., parent, full-sibling, child) with a known pathogenic gene mutation (Note: test for the known familial mutation). Aetna considers genetic testing for HCM experimental and investigational for all other indications because its effectiveness for indications other than the one listed above has not been established.

Thoracic aortic aneurysms and dissections (TAAD): Aetna considers genetic testing for thoracic aortic aneurysms and dissections (TAAD) medically necessary for asymptomatic persons with an affected first-degree blood relative (i.e., parent, full-sibling, child) with a known deleterious or suspected deleterious mutation in a gene known to cause familial TAAD (testing strategy: test for the known familial mutation). Genetic testing for thoracic aortic aneurysms and dissections (TAAD) is considered experimental and investigational for any other indication, including but not limited to patients clinically diagnosed with TAAD, with a positive family history of the disorder, and for whom a genetic syndrome has been excluded.

Loeys-Dietz syndrome (LDS): Aetna considers TGFBR1 and TGFBR2 gene testing for LDS medically necessary when the following criteria are met: Asymptomatic individual who has an affected first-degree blood relative (i.e., parent, full-sibling, child) with a known deleterious or suspected deleterious mutation (testing strategy: test for the known familial mutation); or To confirm or establish a diagnosis of LDS in an individual with characteristics of LDS (e.g., aortic/arterial aneurysms/tortuosity, arachnodactyly, bicuspid aortic valve and patent ductus arteriosus, blue sclerae, camptodactyly, cerebral, thoracic or abdominal arterial aneurysms and/or dissections, cleft palate/bifid uvula, club feet, craniosynostosis, easy bruising, joint hypermobility, ocular hypertelorism, pectus carinatum or pectus excavatum, scoliosis, talipes equinovarus, thin skin with atrophic scars, velvety and translucent skin, widely spaced eyes) (testing strategy: begin with sequence analysis of TGFBR2; if a mutation is not identified, proceed with sequence analysis of TGFBR1).
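The LDS testing strategy stated above is a simple reflex sequence: sequence TGFBR2 first and proceed to TGFBR1 only if no mutation is identified. The sketch below illustrates that ordering; the sequencing function is a stand-in for a laboratory result lookup, and the example variant string is invented.

# Sketch of the reflex sequencing order stated in the LDS testing strategy above.
# The "sequence_gene" callable is a hypothetical stand-in for real lab results.
from typing import Callable, Optional

def lds_testing_strategy(sequence_gene: Callable[[str], Optional[str]]) -> str:
    for gene in ("TGFBR2", "TGFBR1"):          # order follows the policy text
        variant = sequence_gene(gene)
        if variant is not None:
            return f"variant identified in {gene}: {variant}"
    return "no mutation identified in TGFBR2 or TGFBR1"

# Example with stubbed results standing in for laboratory findings.
stub_results = {"TGFBR2": None, "TGFBR1": "illustrative variant"}
print(lds_testing_strategy(lambda gene: stub_results[gene]))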
Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C): Genetic testing for arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is considered medically necessary for the following indications: Testing for sequence variants in the DSG2, DSP, and PKP2 genes in probands with ITF-confirmed ARVD/C, to facilitate genetic screening for ARVD/C in at-risk relatives; or Testing for a known familial sequence variant in the DSG2, DSP, and PKP2 genes for at-risk relatives of probands with ITF-confirmed ARVD/C, who are either asymptomatic or have ARVD/C symptoms that fail to meet the ITF diagnostic criteria. Genetic testing for ARVD/C is considered experimental and investigational for all other indications.

Osteogenesis imperfecta (COL1A1, COL1A2): Genetic testing for COL1A1 and COL1A2 gene sequencing in the management of osteogenesis imperfecta (OI) types I to IV is considered medically necessary for the following indications: Genetic testing for sequence variants in COL1A1/2 to confirm the presence of mosaicism in the asymptomatic parent of a child with OI caused by sequence variants in COL1A1/2, for reproductive decision-making purposes; or Preimplantation genetic diagnosis or prenatal diagnosis for sequence variants in COL1A1/2 in couples in which 1 or both members have OI caused by sequence variants in COL1A1/2. Genetic testing for COL1A1 and COL1A2 gene sequencing is considered experimental and investigational in any other circumstances, including, but not limited to: Testing for sequence variants in COL1A1/2 to confirm a diagnosis of OI when clinical and radiological examination and family history provide adequate information for diagnosis of OI; Genetic testing for sequence variants in COL1A1/2 for the diagnosis of OI when clinical and radiological examination and family history provide inadequate information for diagnosis of OI; Genetic testing for sequence variants in COL1A1/2 in children diagnosed with OI to aid in reproductive planning for unaffected couples seeking to have additional children. Genetic testing for COL1A1/2 is considered experimental and investigational for all other indications.

Neurofibromatosis: Genetic testing for neurofibromatosis is considered medically necessary for persons who meet all of the following criteria: Displays a sign of or has clinical features of NF; and Has a 50% risk of inheriting NF (pre-symptomatic); and A definitive diagnosis remains uncertain despite a complete family/personal history, physical examination and conventional diagnostic studies; and Confirmation of the diagnosis will impact treatment. Genetic testing for neurofibromatosis is considered experimental and investigational for all other indications.

Marfan syndrome (FBN1): Aetna considers FBN1 gene testing for Marfan syndrome (MFS) medically necessary for the following indications: Marfan syndrome is suspected, but the clinical diagnostic criteria (refer to Table 1) have not led to a confirmed diagnosis of Marfan syndrome, and both of the following criteria are met: absence of a confirmed family history of Marfan syndrome, and presence of ectopia lentis with any aortic dilation, or significant aortic dilation (Z-score greater than or equal to 2)/dissection only (testing strategy: begin with sequencing of the FBN1 gene; deletion/duplication analysis of the FBN1 gene is considered medically necessary if a mutation is not identified by sequence analysis); or Testing of an asymptomatic individual who has an affected first-degree blood relative (i.e., parent, full-sibling, child) with a known deleterious or suspected deleterious mutation
(testing strategy: test for the known familial mutation); or The prenatal diagnosis or PGD of Marfan syndrome in the offspring of patients with known disease-causing variants. Genetic testing for Marfan syndrome (MFS) is considered experimental and investigational for any other indications, including but not limited to: The use of FBN1 gene testing in the diagnostic evaluation of Marfan syndrome in patients exhibiting only minor features of the condition, according to the Ghent diagnostic criteria; The use of TGFBR2 gene testing to facilitate the diagnosis of Marfan syndrome in patients testing negative for FBN1 gene variants; The use of TGFBR1 gene testing to facilitate the diagnosis of Marfan syndrome in patients testing negative for FBN1 and TGFBR2 gene variants; The use of Marfan syndrome gene testing in patients fulfilling the Ghent diagnostic criteria who will not be using the information for reproductive decision making or for facilitating the diagnosis of Marfan syndrome in at-risk relatives.

Table 1. Clinical diagnostic criteria for Marfan syndrome. In the absence of a family history of Marfan syndrome, the presence of any of the following is diagnostic for Marfan syndrome: Aortic criterion (aortic diameter Z-score greater than or equal to 2, or aortic root dissection) and ectopia lentis; or Aortic criterion (aortic diameter Z-score greater than or equal to 2, or aortic root dissection) and a systemic score greater than or equal to 7 (refer to Table 2). In the presence of a family history of Marfan syndrome, the presence of any one of the following is diagnostic for Marfan syndrome: Aortic criterion (aortic diameter Z-score greater than or equal to 2 above 20 years old, Z-score greater than or equal to 3 below 20 years old, or aortic root dissection); or Ectopia lentis; or Systemic score greater than or equal to 7 points (refer to Table 2). For criteria with an asterisk, the diagnosis of Marfan syndrome can be made only in the absence of discriminating features of Shprintzen-Goldberg syndrome (SGS), LDS or vascular Ehlers-Danlos syndrome (vEDS). Table 2. Calculation of the systemic score for Marfan syndrome.

Ehlers-Danlos syndrome, vascular type (COL3A1): Aetna considers sequence analysis of the COL3A1 gene for EDS vascular type medically necessary when the following criteria are met: Asymptomatic individuals with a first-degree blood relative (i.e., parent, full-sibling, child) who has been diagnosed with EDS vascular type and in whom the disease-causing mutation has been identified; or Symptomatic individuals, to confirm a diagnosis of EDS vascular type, when the following criteria are met: Presence or history of at least one of the following: Arterial rupture, or First-degree blood relative (i.e., parent, full-sibling, child) diagnosed with EDS vascular type, or Intestinal rupture, or Uterine rupture during pregnancy; or Presence or history of at least two of the following: Acrogeria (aged appearance to extremities, particularly hands), Arteriovenous carotid cavernous sinus fistula, Characteristic facial appearance (thin lips and philtrum, small chin, thin nose, large eyes), Chronic joint subluxations/dislocations, Congenital dislocation of the hips, Early-onset varicose veins, Easy bruising (spontaneous or with minimal trauma), Gingival recession, Hypermobility of small joints, Tendon/muscle rupture, Thin, translucent skin (especially noticeable on chest/abdomen). Note: Testing begins with sequence analysis of the COL3A1 gene.
Biochemical (protein-based) testing may be considered for individuals with a negative sequencing result or when a sequence variant of unknown significance (VUS) is found. Genetic testing for EDS is considered experimental and investigational for all other indications, including the following: An at-risk (unaffected) individual when an affected family member has been tested for mutations and received a result of VUS (also known as an unclassified variant or variant of uncertain significance); Deletion/duplication analysis of the COL3A1 gene; EDS, arthrochalasia type (COL1A1, COL1A2 genes); EDS, dermatosparaxis type (ADAMTS2 gene); EDS, hypermobility type (TNXB gene); EDS, kyphoscoliotic type (PLOD1); EDS, classic type (COL5A1 and COL5A2 genes); General population screening.

Angelman syndrome (AS): Aetna considers genetic testing medically necessary to confirm a diagnosis of AS when the following criteria are met: Presence of gait ataxia and/or tremulous movement of the limbs; and Presence of severe developmental delay or intellectual disability; and Presence of severe speech impairment; and Presence of unique behavior with an inappropriately happy demeanor that includes frequent laughing, smiling and excitability. AS testing strategy: Testing begins with DNA methylation analysis of chromosome 15 (15q11-q13). If DNA methylation analysis is normal, then proceed to UBE3A sequence analysis. If DNA methylation analysis is abnormal, then proceed to deletion/duplication analysis (FISH/CMA). If deletion/duplication analysis (FISH/CMA) is normal, then proceed to a uniparental disomy (UPD) study. If the UPD study is normal, then proceed to an imprinting defect (ID) study.

Prader-Willi syndrome (PWS): Aetna considers genetic testing medically necessary to confirm a diagnosis of PWS when the following criteria are met: Individual is age birth to two years with hypotonia with poor suck; or Individual is age two to six years with the following characteristics: hypotonia with a history of poor suck, and global developmental delay (GDD); or Individual is age six to 12 years with the following characteristics: history of hypotonia with poor suck (hypotonia often persists), and excessive eating with central obesity if uncontrolled; or Individual is age 13 years to adulthood with the following characteristics: cognitive impairment, usually mild intellectual disability, and excessive eating with central obesity if uncontrolled. PWS testing strategy: Testing begins with DNA methylation analysis. If DNA methylation analysis is abnormal, then proceed to FISH/CMA. If FISH/CMA is normal, then proceed to a uniparental disomy (UPD) study. If the UPD study is normal, then proceed to an imprinting defect (ID) study.

Von Hippel-Lindau (VHL) syndrome: Aetna considers VHL gene testing medically necessary for VHL syndrome when the following criteria are met: Individual to be tested has a first- (i.e., parent, full-sibling, child) or second-degree (i.e., aunt, uncle, grandparent, grandchild, niece, nephew, half-sibling) blood relative with a known deleterious or suspected deleterious VHL gene mutation (testing strategy: test for the specific familial mutation); or Individual to be tested has a personal history of one or more of the following (testing strategy: testing begins with sequence analysis of the VHL gene.
Deletion/duplication analysis of the VHL gene may be considered for individuals with a negative sequencing analysis result): Clear cell renal cell carcinoma; Endolymphatic sac tumor; Epididymal or adnexal papillary cystadenoma; Hemangioblastoma; Multiple renal and/or pancreatic cysts; Pancreatic neuroendocrine tumors; Pancreatic serous cystadenomas; Pheochromocytoma or paraganglioma; Retinal angioma.

Cadherin-1 for hereditary diffuse gastric cancer: Aetna considers genetic testing for cadherin-1 (E-cadherin, CDH1) mutations for hereditary diffuse gastric cancer (HDGC) medically necessary when any of the following criteria is met: 2 gastric cancer cases in a family, 1 confirmed diffuse gastric cancer (DGC) diagnosed before age 50 years; or 3 confirmed cases of DGC in 1st- or 2nd-degree relatives, independent of age; or DGC diagnosed before age 40 years without a family history; or Personal or family history of DGC and lobular breast cancer, 1 diagnosed before age 50 years. Cadherin-1 testing is considered experimental and investigational for confirming the clinical diagnosis of HDGC and for other indications because there is inadequate evidence in the peer-reviewed published clinical literature regarding its effectiveness.

Tay-Sachs disease (HEXA): Aetna considers genetic testing of the HEXA gene medically necessary for carrier screening for Tay-Sachs disease for couples planning pregnancy or seeking prenatal care when any of the following criteria are met: Individual to be tested has abnormal or inconclusive beta-hexosaminidase A enzyme activity; or Individual to be tested has an affected or carrier family member in whom a mutation has been identified; or Individual to be tested is of Ashkenazi Jewish descent or is the reproductive partner of an individual of Ashkenazi Jewish descent; or Individual to be tested is the reproductive partner of an individual affected with or a carrier of Tay-Sachs disease. Note: Testing begins with a targeted mutation panel. If negative, sequence analysis may be considered.

Maturity-onset diabetes of the young (MODY): Genetic testing for maturity-onset diabetes of the young (MODY) is considered medically necessary for the diagnosis of MODY2 or MODY3 in persons with hyperglycemia or non-insulin-dependent diabetes who have a family history of abnormal glucose metabolism in at least 2 consecutive generations, with the patient or at least 1 family member diagnosed before age 25. Genetic testing for maturity-onset diabetes of the young (MODY) is considered experimental and investigational for all other indications.

Huntington disease: Aetna considers genetic testing for Huntington disease (HD) medically necessary for either of the following indications: Predictive testing for CAG repeat length in asymptomatic individuals from families in which there is a history of HD, to define the risk of transmission; or Prenatal testing for CAG repeat length in fetuses from families in which there is a history of HD. Genetic testing for Huntington disease is considered experimental and investigational for indications other than those listed above.
Familial hypocalciuric hypercalcemia: Aetna considers genetic testing for familial hypocalciuric hypercalcemia (FHH) medically necessary in any of the following: Atypical cases where no family members are available for testing; or Families with familial isolated hyperparathyroidism; or Infants or children under 10 years of age, in whom neonatal hyperparathyroidism, neonatal severe hyperparathyroidism, and FHH are the commonest causes of parathyroid hormone-dependent hypercalcemia; or Individuals with overlap in the calcium/creatinine (Ca/Cr) clearance ratio, namely between 0.01 and 0.02; or Individuals with the phenotype of FHH whose parents are both normocalcemic (i.e., FHH possibly caused by a de novo CaSR mutation).

Spinocerebellar ataxia (SCA): Aetna considers genetic testing of SCA1 (ATXN1 gene), SCA2 (ATXN2 gene), SCA3 (ATXN3 gene), SCA6 (CACNA1A gene), SCA7 (ATXN7) and DRPLA (ATN1 gene) medically necessary to aid in the diagnosis of SCA when the following criteria are met: Individual to be tested exhibits signs and symptoms of SCA, such as progressive gait and limb incoordination, imbalance, dysarthria and disturbances of eye movements; and Non-genetic causes of ataxia have been excluded (e.g., alcoholism, multiple sclerosis, primary or metastatic tumors or paraneoplastic diseases associated with occult carcinoma of the ovary, breast or lung, vascular disease, vitamin deficiencies). Note: Initially, testing for SCA1, SCA2, SCA3, SCA6, SCA7 and DRPLA is considered medically necessary. If results are normal, and a high index of suspicion remains for SCA based on clinical findings, testing for the following additional genes is considered medically necessary: SCA5 (SPTBN2), SCA8 (ATXN8/ATXN8OS), SCA10 (ATXN10), SCA12 (PPP2R2B), SCA13 (KCNC3), SCA14 (PRKCG), SCA17 (TBP) and SCA27 (FGF14).

Duchenne muscular dystrophy and Becker muscular dystrophy: Aetna considers DMD gene testing medically necessary when the following criteria are met: Carrier screening when the individual to be tested is an asymptomatic female and has an affected blood relative in whom a disease-causing DMD or BMD mutation has been identified (testing strategy: test for the known mutation); or Individual to be tested exhibits characteristic features of DMD or BMD (e.g., progressive symmetric muscular weakness (proximal greater than distal), often with enlargement of the calf muscles, wheelchair dependency before age 13 years for DMD and after age 16 years for BMD) and the individual has an elevated serum creatine kinase (CK) concentration. (Note: Normal value ranges may vary slightly among different labs. Some labs use different measurements or test different samples. In males with DMD, serum CK levels are more than 10 times normal, and in BMD about five times normal. Some female carriers of DMD or BMD have levels two to 10 times normal.) Note on testing strategy: Testing begins with deletion/duplication analysis of the DMD gene. Sequence analysis of the DMD gene is considered medically necessary for individuals with a negative deletion/duplication result.

Myotonic dystrophy type 1 and type 2: Aetna considers genetic testing for myotonic dystrophy type 1 (DM1) (DMPK gene) and myotonic dystrophy type 2 (DM2) (CNBP gene) medically necessary when the following criteria are met: Individual to be tested exhibits characteristic features of DM1 or DM2 (e.g., muscle weakness, muscle pain, myotonia) (testing strategy: targeted mutation analysis of the DMPK and/or CNBP gene;
CNBP sequence analysis for DM2 is considered experimental and investigational); or Individual to be tested is asymptomatic, is at least 18 years old, and has an affected first-degree blood relative (i.e., parent, full-sibling, child) in whom a disease-causing DM1 or DM2 mutation has been identified (testing strategy: test for the known familial mutation).

PTEN gene testing: Based upon guidelines from the National Comprehensive Cancer Network, PTEN gene testing is considered medically necessary in individuals with a suspected or known clinical diagnosis of Cowden syndrome or Bannayan-Riley-Ruvalcaba syndrome (BRR), or a known family history of a PTEN mutation, who meet any of the following criteria: A relative with a known deleterious PTEN gene mutation; or Personal history of any of the following: Bannayan-Riley-Ruvalcaba syndrome, or adult Lhermitte-Duclos disease (LDD), or autism-spectrum disorder and macrocephaly, or at least 2 biopsy-proven trichilemmomas, or macrocephaly plus one other major criterion, or three major criteria without macrocephaly, or one major and at least three minor criteria, or four or more minor criteria; or Family history of both of the following: an at-risk relative (includes a 1st-degree relative, or more distant relatives if the 1st-degree relative is unavailable or unwilling to be tested) with a clinical diagnosis of Cowden syndrome or BRR (no previous genetic testing), and one major or two minor criteria in the at-risk relative. Criteria for PTEN genetic testing purposes are: Breast cancer; Mucocutaneous lesions (one biopsy-proven trichilemmoma, multiple palmoplantar keratoses, multi-focal or extensive oral mucosal papillomatosis, multiple cutaneous facial papules (often verrucous), macular pigmentation of the glans penis); Macrocephaly (97th percentile or greater: 58 cm in adult women, 60 cm in adult men); Endometrial cancer; Follicular thyroid cancer; Multiple GI hamartomas or ganglioneuromas; Colon cancer; Esophageal glycogenic acanthosis (3 or more); Papillary or follicular variant of papillary thyroid cancer; Testicular lipomatosis; Vascular anomalies (including multiple intracranial developmental venous anomalies); Other thyroid lesions (e.g., adenoma, nodule(s), goiter); Mental retardation (IQ less than 75); Autism spectrum disorder; Single gastrointestinal hamartoma or ganglioneuroma; Lipomas; Renal cell carcinoma. Note: Insufficient evidence exists in the literature to include fibrocystic disease of the breast, fibromas, and uterine fibroids as diagnostic criteria.

Li-Fraumeni syndrome (TP53 gene): TP53 gene testing is considered medically necessary for individuals with a suspected or known clinical diagnosis of Li-Fraumeni syndrome (LFS) or Li-Fraumeni-like syndrome, or a known family history of a TP53 mutation. Testing is considered medically necessary in individuals whose medical and/or family history is consistent with any of the following: A relative with a known deleterious TP53 gene mutation; or A diagnosis of classic Li-Fraumeni syndrome, defined by all of the following: diagnosis of sarcoma before the age of 45 years, and a parent, child, or full sibling diagnosed with cancer before the age of 45 years, and an additional 1st- or 2nd-degree relative in the same lineage with cancer diagnosed before age 45 years, or a sarcoma at any age; or Persons meeting the Chompret criteria: Persons with a tumor from the LFS tumor spectrum (e.g.,
soft tissue sarcoma, osteosarcoma, brain tumor, breast cancer, adrenocortical carcinoma, leukemia, lung bronchoalveolar cancer) before age 46 years and at least one first- or second-degree relative with any of the aforementioned cancers (other than breast cancer, if the proband has breast cancer) before the age of 56 years, or with multiple primaries at any age; or Persons with multiple tumors (except multiple breast tumors), two of which belong to the LFS tumor spectrum, with the initial cancer occurring before the age of 46 years; or Individuals with adrenocortical carcinoma or choroid plexus carcinoma at any age of onset, regardless of family history; or A diagnosis of breast cancer before age 35 years with a negative BRCA1/2 test, especially if there is a family history of sarcoma, brain tumor, or adrenocortical carcinoma. Cancers associated with Li-Fraumeni syndrome include, but are not limited to, premenopausal breast cancer, bone and soft tissue sarcomas, acute leukemia, brain tumor, adrenocortical carcinoma, choroid plexus carcinoma, colon cancer, and early onset of other adenocarcinomas or other childhood cancers. Note that Ewing sarcoma is less likely to be related to Li-Fraumeni syndrome than other sarcomas.

Peutz-Jeghers syndrome: STK11 (LKB1) gene testing may be considered for individuals with a suspected or known clinical diagnosis of Peutz-Jeghers syndrome (PJS), or a known family history of an STK11 (LKB1) mutation. Testing may be considered for individuals whose medical and/or family history is consistent with any of the following: A relative with a known deleterious STK11 (LKB1) gene mutation; or A clinical diagnosis of PJS based on at least 2 of the following features: at least 2 PJS-type hamartomatous polyps of the small intestine, mucocutaneous hyperpigmentation of the mouth, lips, nose, eyes, genitalia, or fingers, or a family history of PJS.

Ashkenazi Jewish testing panel: Aetna considers preconception or prenatal carrier screening medically necessary for couples of Ashkenazi Jewish ancestry with a panel of genetic tests recommended by the American College of Medical Genetics (ACMG): Tay-Sachs disease, Canavan disease, cystic fibrosis, familial dysautonomia, familial hyperinsulinism, Joubert syndrome, maple syrup urine disease, Bloom syndrome, Fanconi anemia, Niemann-Pick disease, Gaucher disease, glycogen storage disease type 1, mucolipidosis IV, and nemaline myopathy. If only one partner is of Ashkenazi Jewish ancestry, then testing of that partner is considered medically necessary. Testing of the other partner is considered medically necessary only if the result of testing of the Ashkenazi Jewish partner is positive.

Juvenile polyposis syndrome: Genetic testing for juvenile polyposis syndrome (JPS) (BMPR1A and SMAD4) is considered medically necessary for persons who meet any of the following criteria: More than five pathologically confirmed juvenile polyps of the colorectum; or Multiple pathologically confirmed juvenile polyps throughout the GI tract; or Any number of pathologically confirmed juvenile polyps and a family history of juvenile polyps. Genetic testing for SMAD4 is considered medically necessary for infants with first-degree relatives with JPS because of the risk of hereditary hemorrhagic telangiectasia.

Familial hypercholesterolemia: Based upon a consensus statement on familial hypercholesterolemia (FH) from the European Atherosclerosis Society (Nordestgaard et al.
2013), genetic testing for familial hypercholesterolemia is considered medically necessary in persons who meet any of the following criteria: persons with a definite or probable diagnosis of FH (DLCN score >5); or a first-degree relative with a causative FH mutation; or plasma total cholesterol ≥310 mg/dL (>8 mmol/L) in an adult or adult family member(s), or >230 mg/dL (>6 mmol/L) in a child or child family member(s); or premature clinical CHD in the subject or family member(s); or tendon xanthomas in the subject or family member(s); or sudden premature cardiac death in a family member. Genetic testing for FH must begin with LDL-R mutations (approximately 75% of FH mutations) and, if negative, reflex to testing for an Apo-B mutation (approximately 20%) and then to a PCSK9 mutation (approximately 5%). The European Atherosclerosis Society recommends using the DLCN criteria (available at eurheartj.oxfordjournals.org/content/34/45/3478.long) to establish the clinical diagnosis of FH. Among individuals with a definite or probable diagnosis of FH (DLCN >5), and particularly those with an obvious clinical diagnosis with xanthoma and/or high cholesterol plus a family history of premature CHD, molecular genetic testing is strongly recommended. When a causative mutation is found in the index case, a genetic test should be offered to all first-degree relatives. In cases of probable or definite FH, cascade screening using LDL cholesterol measurement in the family should be conducted and the subject referred for genetic testing if available, with subsequent cascade testing in the family if a causative mutation is found. Initial family members to be tested are biological first-degree relatives, namely parents, siblings, and children. Biological second-degree relatives, including grandparents, grandchildren, uncles, aunts, nephews, nieces, and half-siblings, should also be considered. Premature CHD signifies CHD before age 55 in males and before age 60 in female first-degree relatives, while in second-degree relatives the corresponding ages are 50 and 55. Clinical CHD is defined by a history of an AMI, silent MI, unstable angina, a coronary revascularization procedure (PCI or CABG), or clinically significant atherosclerotic cardiovascular disease diagnosed by invasive or noninvasive testing (such as coronary angiography, stress test using treadmill, stress echocardiography, or nuclear imaging).
Aetna considers genetic testing medically necessary for the diagnosis of Menkes disease in children with low serum copper (0 to 55 microg/dL) and low serum ceruloplasmin (10 to 160 mg/L) concentrations.
Aetna considers genetic testing experimental and investigational for any of the following: age-related macular degeneration; Brugada syndrome; choroidal neovascularization (e.g., Retnagene); congenital stationary night blindness; coronary artery disease (except testing for familial hypercholesterolemia); Costello syndrome (HRAS gene); Diamond-Blackfan anemia; dilated cardiomyopathy CMD1A; epidermolytic hyperkeratosis; essential tremor; familial Alzheimer disease; familial amyotrophic lateral sclerosis (SOD1 mutation); familial cold urticaria/familial cold autoinflammatory syndrome; familial partial lipodystrophy (FPLD2); facioscapulohumeral muscular dystrophy (FSHD); genetic testing panels for aortic dysfunction or dilation; genetic testing panels for colon cancer syndromes; genetic testing panels for nonsyndromic hereditary hearing loss (e.g.,
OtoScope, OtoGenome, OtoSeq); genetic testing panels for X-linked intellectual disability; glioblastoma multiforme; hemiplegic migraine (HM); hemophilia C (F11 (Factor XI)); heterotaxy; Klippel-Feil syndrome; lactose intolerance; left ventricular non-compaction; Legius syndrome (SPRED1 gene); malignant melanoma (CDKN2A/p16) (e.g., Melaris); May-Hegglin anomaly; McCune-Albright syndrome; Mowat-Wilson syndrome (ZEB2 gene); multiple mitochondrial respiratory chain complex deficiencies; myoclonus-dystonia (epsilon-sarcoglycan gene (SGCE) deletion analysis); migrainous vertigo; narcolepsy; next-generation sequencing for the diagnosis of learning disabilities in children; oculopharyngeal muscular dystrophy (OPMD) (PABPN1 gene); osteoporosis; Parkinson disease; polycystic kidney disease; polycystic liver disease; prostate cancer; seizure disorders (e.g., creatine transporter 1 sequencing for testing parents of individuals with seizures, GABRG2 mutation and SCN1A deletion testing for infantile febrile seizures, and generalized epilepsy with febrile seizures plus (GEFS+)); sleep-walking; Townes-Brocks syndrome (SALL1 gene); type 2 diabetes (other than MODY); very long chain acyl-CoA dehydrogenase deficiency (VLCADD); and von Willebrand factor gene testing. Aetna considers the following tests experimental and investigational: deCODE AF; deCODE BreastCancer; deCODE Glaucoma; deCODE MI; deCODE PrCa; deCODE T2; EpiSEEK test for epilepsy/seizures; Genetic Addiction Risk Score (GARS/PREDX™); home genetic tests; MTHFR genetic testing for risk assessment of hereditary thrombophilia; multigene panels, often using next-generation sequencing (NGS), to predict risk of several cancers (e.g., BreastNext, BROCA Cancer Risk Panel, CancerNext, CancerNext Expanded, ColoNext, ColoSeq, Invitae Common Hereditary Cancers Panel, Invitae Gastric Cancer Panel, Invitae Hereditary Cancer Syndromes Panel, Invitae Hereditary Paraganglioma-Pheochromocytoma Panel, Invitae Melanoma Panel, Invitae Melanoma-Pancreatic Cancer Panel, Invitae Multi-Cancer Panel, Invitae Pancreatic Cancer Panel, Invitae Thyroid Cancer Panel, myRisk Hereditary Cancer Panel, OncoGeneDx Comprehensive Cancer Panel, OncoGeneDx Custom Panel, OncoGeneDx High/Moderate Risk Panel, OncoGeneDx Pancreatic Cancer Panel, OvaNext, PancNext, Panexia, VistaSeq Hereditary Cancer Panel); nuclear encoded mitochondrial genomic sequencing panel; plasminogen activator inhibitor-1 (PAI-1) for inherited thrombophilia; POLG1 for mitochondrial recessive ataxia syndrome; single nucleotide polymorphisms for breast cancer (Oncovue, Brevagen); SLCO1B1 testing for statin-induced myopathy; SLIT1 testing for Asperger syndrome; whole exome sequencing; whole genome sequencing; and whole mitochondrial genome sequencing. Genetic tests are laboratory studies of human deoxyribonucleic acid (DNA), chromosomes, genes, or gene products to diagnose the presence of a genetic variation associated with a high risk of having or transmitting a specific genetic disorder. According to the American College of Medical Genetics (ACMG), an important issue in genetic testing is defining the scope of informed consent. The obligation to counsel and obtain consent is inherent in the clinician-patient and investigator-subject relationships. In the case of most genetic tests, the patient or subject should be informed that the test might yield information regarding a carrier or disease state that requires difficult choices regarding their current or future health, insurance coverage, career, marriage, or reproductive options.
The objective of informed consent is to preserve the individual's right to decide whether to have a genetic test. This right includes the right of refusal should the individual decide the potential harm (stigmatization or undesired choices) outweighs the potential benefits. DNA-based mutation analysis is not covered for routine carrier testing for the diagnosis of Tay-Sachs and Sandhoff disease. Under accepted guidelines, diagnosis is primarily accomplished through biochemical assessment of serum, leukocyte, or platelet hexosaminidase A and B levels. The literature states that mutation analysis is appropriate for individuals with persistently inconclusive enzyme-based results and to exclude pseudo-deficiency (non-disease related) mutations in carrier couples. Testing of a member who is at substantial familial risk for being a heterozygote (carrier) for a particular detectable mutation that is recognized to be attributable to a specific genetic disorder is only covered for the purpose of prenatal counseling under plans with this benefit (see CPB 0189 - Genetic Counseling). Confirmation by molecular analysis of inborn errors of metabolism detected by traditional screening methodologies (e.g., Guthrie microbiologic assays) is covered. Rigorous clinical evaluation should precede diagnostic molecular testing. In many instances, reliable mutation analysis requires accurate determination of specific allelic variations in a proband (affected individual in a family) before subsequent carrier testing in other at-risk family members can be accurately performed. Coverage of testing for individuals who are not Aetna members is not provided, except under the limited circumstances outlined in the policy section above.
Hereditary non-polyposis colon cancer
Hereditary non-polyposis colon cancer (HNPCC, Lynch syndrome) is one of the most common cancer predisposition syndromes, affecting 1 in 200 individuals and accounting for 13% to 15% of all colon cancer. HNPCC is defined clinically by early-onset colon carcinoma and by the presence of other cancers, such as endometrial, gastric, urinary tract, and ovarian, found in at least 3 first-degree relatives. Two genes have been identified as being primarily responsible for this syndrome: hMLH1 at chromosome band 3p21, which accounts for 30% of HNPCC, and hMSH2 (also known as FCC) at chromosome band 2p22, which together with hMLH1 accounts for 90% of HNPCC. Unlike other genetic disorders that are easily diagnosed, the diagnosis of HNPCC relies on a very strongly positive family history of colon cancer. Specifically, several organizations have defined criteria that must be met to make the diagnosis of HNPCC. Although HNPCC lacks strict clinical distinctions that can be used to make the diagnosis, and therefore diagnosis is based on the strong family history, genetic testing is now available to study patients' DNA for mutations in one of the mismatch repair genes. A mutation in one of these genes is a characteristic feature and confirms the diagnosis of HNPCC. Identifying individuals with this disease and performing screening colonoscopies on affected persons may help reduce colon cancer mortality. Microsatellite instability (MSI) is found in the colorectal cancer DNA (but not in the adjacent normal colorectal mucosa) of most individuals with germline mismatch repair gene mutations. In combination with immunohistochemistry for MSH2 and MLH1, MSI testing using the Bethesda markers should be performed on the tumor tissue of individuals putatively affected with HNPCC.
A result of MSI-high in tumor DNA usually leads to consideration of germline testing for mutations in the MSH2 and MLH1 genes. Individuals with MSI-low or microsatellite stable (MSS) results are unlikely to harbor mismatch repair gene mutations, and further genetic testing is usually not pursued. HNPCC is caused by germline mutation of the DNA mismatch repair genes. Over 95% of HNPCC patients have mutations in either MLH1 or MSH2. As a result, sequencing for mismatch repair gene mutations in suspected HNPCC families is usually limited to MLH1 and MSH2, and sometimes MSH6 and PMS2. In general, MSH6 and PMS2 sequence analysis is performed in persons meeting the aforementioned criteria for genetic testing for HNPCC and who do not have mutations in either the MLH1 or MSH2 genes. In addition, single-site MSH6 or PMS2 testing may be appropriate for testing family members of persons with HNPCC with an identified MSH6 or PMS2 gene mutation. HNPCC is a relatively rare disease, which makes screening the entire populace burdensome and ineffective. The incidence of this disease, even among the families of patients with colon cancer, is too small to make screening effective. (See also CPB 0189 - Genetic Counseling and CPB 0227 - BRCA Testing, Prophylactic Mastectomy, and Prophylactic Oophorectomy.)
Familial adenomatous polyposis (FAP)
Familial adenomatous polyposis (FAP) is caused by mutation of the adenomatous polyposis coli (APC) gene. According to guidelines from the American Gastroenterological Association (AGA, 2001), adenomatous polyposis coli gene testing is indicated to confirm the diagnosis of familial adenomatous polyposis, provide pre-symptomatic testing for at-risk members (1st-degree relatives 10 years or older of an affected patient), confirm the diagnosis of attenuated familial adenomatous polyposis in those with more than 20 adenomas, and test those 10 years or older at risk for attenuated FAP. The AGA guidelines state that germline testing should first be performed on an affected member of the family to establish a detectable mutation in the pedigree. If a mutation is found in an affected family member, then genetic testing of at-risk members will provide true positive or negative results. The AGA guidelines state that, if a pedigree mutation is not identified, further testing of at-risk relatives should be suspended because the gene test will not be conclusive: a negative result could be a false negative because the testing is not capable of detecting the mutation even if it is present. When an affected family member is not available for evaluation, starting the test process with at-risk family members can provide only positive or inconclusive results. In this circumstance, a true negative test result for an at-risk individual can only be obtained if another at-risk family member tests positive for a mutation. MYH is a DNA repair gene that corrects DNA base-pair mismatch errors in the genetic code before replication. Mutation of the MYH gene may result in colon cancer. In this regard, the MYH gene has been found to be significantly involved in colon cancer, both in cases where there is a clear family history of the disease, as well as in cases without any sign of a hereditary cause.
The National Comprehensive Cancer Network (NCCN) practice guidelines on colorectal cancer screening (2006) recommended testing for MYH mutations in individuals with a personal history of adenomatous polyposis (more than 10 adenomas, or more than 15 cumulative adenomas in 10 years) either consistent with recessive inheritance or with adenomatous polyposis with negative adenomatous polyposis coli (APC) mutation testing. The guideline noted that, when polyposis is present in a single person with a negative family history, de novo APC mutation should be tested first; if negative, testing for MYH should follow. When the family history is positive only for a sibling, recessive inheritance should be considered and MYH testing should be done first. In a polyposis family with clear autosomal dominant inheritance and absence of an APC mutation, MYH testing is unlikely to be informative. Members of such a family are treated according to the polyposis phenotype, including classical or attenuated FAP. Thrombophilia is a disorder of blood coagulation that increases the risk for blood clots (thrombosis) in veins or arteries. Thrombophilia can be acquired or inherited. The most common acquired thrombophilias occur as a result of injury, surgery, or a medical condition. The most common hereditary thrombophilias are factor V Leiden (FVL), due to a mutation in the F5 gene, and prothrombin G20210A, the result of a mutation in the F2 gene. The factor V Leiden mutation is the most common hereditary blood coagulation disorder in the United States. It is present in 5% of the Caucasian population and 1.2% of the African-American population. Factor V Leiden increases the risk of venous thrombosis 3- to 8-fold for heterozygous individuals and 30- to 140-fold for homozygous individuals. The factor V Leiden mutation has been associated with complications including cerebrovascular accident and myocardial infarction, deep venous thrombosis, and preeclampsia and/or eclampsia. Situations in which targeted testing may be considered include: relatives of individuals with venous thrombosis under age 50; venous thrombosis and a strong family history of thrombotic disease; venous thrombosis in pregnant women or women taking oral contraceptives; and venous thrombosis in unusual sites (such as hepatic, mesenteric, and cerebral veins). The ACMG does not recommend random screening of the general population for factor V Leiden. Routine testing is also not recommended for patients with a personal or family history of arterial thrombotic disorders (e.g., acute coronary syndromes or stroke), except for the special situation of myocardial infarction in young female smokers. According to the ACMG, testing may be worthwhile for young patients (less than 50 years of age) who develop acute arterial thrombosis in the absence of other risk factors for atherosclerotic arterial occlusive disease. The ACMG does not recommend prenatal testing or routine newborn screening for the factor V Leiden mutation. The ACMG does not recommend general screening for the factor V Leiden mutation before administration of oral contraceptives. The ACMG recommends targeted testing prior to oral contraceptive use in women with a personal or family history of venous thrombosis. Factor V Leiden screening of asymptomatic individuals with other recognized environmental risk factors, such as surgery, trauma, paralysis, and malignancy, is not necessary or recommended by the ACMG, since all such individuals should receive appropriate medical prophylaxis for thrombosis regardless of carrier status.
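The targeted-testing situations just listed amount to a simple any-of check. The short Python sketch below is illustrative only; the function name and boolean parameters are invented for this example and are not part of the ACMG recommendations or of this policy.

    # Illustrative sketch: encodes the targeted-testing situations named above.
    # Parameter names are hypothetical, not a published schema.
    def factor_v_leiden_testing_may_be_considered(
        relative_with_venous_thrombosis_under_50: bool,
        venous_thrombosis_with_strong_family_history: bool,
        venous_thrombosis_in_pregnancy_or_on_oral_contraceptives: bool,
        venous_thrombosis_in_unusual_site: bool,  # e.g., hepatic, mesenteric, cerebral veins
        arterial_thrombosis_under_50_without_atherosclerotic_risk_factors: bool = False,
        personal_or_family_vte_history_before_oral_contraceptive_use: bool = False,
    ) -> bool:
        return any([
            relative_with_venous_thrombosis_under_50,
            venous_thrombosis_with_strong_family_history,
            venous_thrombosis_in_pregnancy_or_on_oral_contraceptives,
            venous_thrombosis_in_unusual_site,
            arterial_thrombosis_under_50_without_atherosclerotic_risk_factors,
            personal_or_family_vte_history_before_oral_contraceptive_use,
        ])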
When factor V Leiden testing is indicated, the ACMG recommends either direct DNA-based genotyping or a factor V Leiden-specific functional assay (e.g., activated protein C (APC) resistance). Patients who test positive by a functional assay should then be further studied with the DNA test for confirmation and to distinguish heterozygotes from homozygotes. According to the ACMG, patients testing positive for factor V Leiden or APC resistance should be considered for molecular genetic testing for prothrombin 20210A, the most common thrombophilia with an overlapping phenotype for which testing is easily and readily available. The prothrombin 20210A mutation is the second most common inherited clotting abnormality, occurring in 2% of the general population. It is only a mild risk factor for thrombosis, but it may potentiate other risk factors (such as factor V Leiden, oral contraceptives, surgery, trauma, etc.). A factor V gene haplotype (HR2) defined by the R2 polymorphism (A4070G) may confer mild APC resistance and interact with the factor V Leiden mutation to produce a more severe APC resistance phenotype (Bernardi et al, 1997; de Visser et al, 2000; Mingozzi et al, 2003). In one study, co-inheritance of the HR2 haplotype increased the risk of venous thromboembolism associated with factor V Leiden by approximately 3-fold (Faioni et al, 1999). However, double heterozygosity for factor V Leiden and the R2 polymorphism was not associated with a significantly higher risk of early or late pregnancy loss than a heterozygous factor V Leiden mutation alone (Zammiti et al, 2006). Whether the HR2 haplotype alone is an independent thrombotic risk factor is still unclear. Several studies have suggested that the HR2 haplotype is associated with a 2-fold increase in the risk of venous thromboembolism (Alhenc-Gelas et al, 1999; Jadaon and Dashti, 2005). In contrast, other studies (de Visser, 2000; Luddington et al, 2000; Dindagur et al, 2006) found no significant increase in thrombotic risk (GeneTests, University of Washington, Seattle, 2007). Plasminogen activator inhibitor-1 (PAI-1) is an inhibitor of fibrinolysis, the clot-dissolving portion of the coagulation process. PAI-1 is under investigation as a risk factor for conditions such as cardiovascular disease, thrombophilia, and pregnancy-related complications. The PAI-1 test is an antibody-based enzyme assay. CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy) is a rare, genetically inherited, congenital vascular disease of the brain that causes strokes, subcortical dementia, migraine-like headaches, and psychiatric disturbances. CADASIL is very debilitating, and symptoms usually surface around the age of 45. Although CADASIL can be treated with surgery to repair the defective blood vessels, patients often die by the age of 65. The exact incidence of CADASIL in the United States is unknown. DNA testing for CADASIL is appropriate for symptomatic patients who have a family history consistent with an autosomal dominant pattern of inheritance of this condition. Clinical signs and symptoms of CADASIL include stroke, cognitive defects and/or dementia, migraine, and psychiatric disturbances. DNA testing is also indicated for pre-symptomatic patients where there is a family history consistent with an autosomal dominant pattern of inheritance and there is a known mutation in an affected member of the family. This policy is consistent with guidelines on CADASIL genetic testing from the European Federation of Neurological Societies.
Cystic fibrosis (CF) is the most common potentially fatal autosomal recessive disease in the United States. CF is characterized by the production of abnormally viscous mucus by the affected glands and usually causes respiratory infections and impaired pancreatic function. CF produces chronic progressive disease of the respiratory system, malabsorption due to pancreatic insufficiency, increased loss of sodium and chloride in sweat, and male infertility as a consequence of atresia of the vas deferens. Pulmonary disease is the most common cause of mortality and morbidity in individuals with CF. The incidence of this disease ranges from 1:500 in the Amish (Ohio) to 1:90,000 in Hawaiian Orientals, and is estimated to be 1:2,500 among newborns of European ancestry. It occurs less frequently in people with other ethnic and racial backgrounds. About 1 in 25 persons of European ancestry is a carrier (or heterozygote), possessing one normal and one abnormal CF gene. Because of recent advances in the clinical management of CF, babies born today are expected to live well into middle age. Currently, the most frequently employed test for CF is the quantitative pilocarpine iontophoresis sweat test. Sweat chloride is more reliable than sweat sodium for diagnostic purposes, with a sensitivity of 98% and a specificity of 83%. However, this test cannot detect CF carriers because the electrolyte content of sweat is normal in heterozygotes (Wallach, 1991). Genetic testing is used to diagnose CF in individuals with signs and symptoms of the disease. It is also used for carrier screening of potential parents to identify genetic mutations that they are at risk of passing along to their children. Carriers may be unaffected but are at risk for producing children who are affected. Preferably, carrier screening takes place before pregnancy, but it can take place during the early stages of pregnancy. The gene for CF (the cystic fibrosis transmembrane conductance regulator, CFTR) was cloned, and the principal mutant gene in white people (ΔF508) was characterized, in 1989. This mutation is due to a 3-base-pair deletion that results in the loss of a phenylalanine at position 508 from the 1,480-amino-acid coding region (Riordan et al, 1989). This mutation is found in approximately 70% of carriers of European ancestry, but the relative frequency varies from 30% in Ashkenazi Jews to 88% in Danes (Cutting et al, 1992). Available evidence indicates that CFTR functions as a chloride channel, although it may also serve other functions. Since then, more than 200 CF mutations have been described. Five of the most common mutations (ΔF508, G542X, G551D, R553X, N1303K) constitute approximately 85% of the alleles in the United States (Elias et al, 1991). Thus, screening procedures that test for these 5 mutations will detect approximately 85% of CF carriers. The genetic screening test for CF is usually based on mouthwash samples collected by agitating sucrose or saline in the mouth. The DNA of these cells is amplified, digested, and subjected to separation techniques that identify 3 to 5 common mutations. A National Institutes of Health consensus panel (1997) recommended that genetic testing for CF should be offered to adults with a positive family history of CF, to partners of people with the disease, to couples currently planning a pregnancy, and to couples seeking prenatal testing. However, the panel did not recommend genetic testing for CF in the general public or in newborn infants.
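The carrier frequency quoted above (about 1 in 25 in persons of European ancestry) and the approximately 85% detection rate of the common-mutation screen allow a rough estimate of the residual carrier risk after a negative result. The Python sketch below is a back-of-the-envelope illustration using only those two approximate figures; it is not a statement of actual test performance.

    # Rough Bayes-style estimate using the approximate figures cited above.
    prior_carrier_risk = 1 / 25     # ~1 in 25 persons of European ancestry
    detection_rate = 0.85           # common-mutation screen detects ~85% of carriers

    p_carrier_and_missed = prior_carrier_risk * (1 - detection_rate)
    p_negative_screen = p_carrier_and_missed + (1 - prior_carrier_risk)
    residual_risk = p_carrier_and_missed / p_negative_screen

    print(f"Residual carrier risk after a negative screen: about 1 in {round(1 / residual_risk)}")
    # prints roughly 1 in 161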
The American College of Obstetricians and Gynecologists (ACOG, 2001) has issued similar recommendations on genetic carrier testing for CF. ACOG recommends that obstetricians should offer CF screening to: couples in whom one or both members are white and who are planning a pregnancy or seeking prenatal care; individuals with a family history of CF; and reproductive partners of people who have CF. ACOG also recommends that screening should be made available to couples in other racial and ethnic groups. To date, over 900 mutations in the CF gene have been identified. As it is impractical to test for every known mutation, the ACMG Accreditation of Genetic Services Committee has compiled a standard screening panel of 25 CF mutations, which represents the standard panel that the ACMG recommends for screening in the U.S. population (Grody et al, 2001). This 25-mutation panel incorporates all CF-causing mutations with an allele frequency of greater than or equal to 0.1% in the general U.S. population, including mutation subsets shown to be sufficiently predominant in certain ethnic groups, such as Ashkenazi Jews and African Americans. This standard panel of mutations is intended to provide the greatest pan-ethnic detectability that can practically be performed. The ACOG update on carrier screening for CF (2011) provided the following recommendations: if a patient has been screened previously, CF screening results should be documented but the test should not be repeated; and complete analysis of the CFTR gene by DNA sequencing is not appropriate for routine carrier screening.
Fragile X syndrome
Fragile X syndrome is the most common cause of inherited mental retardation, seen in approximately 1 in 1,200 males and 1 in 2,500 females. Phenotypic abnormalities associated with fragile X syndrome include mental retardation, autistic behaviors, a characteristic narrow face with large jaw, and speech and language disorders. Fragile X syndrome was originally thought to be transmitted in an X-linked recessive manner; however, the inheritance pattern of fragile X syndrome has been shown to be much more complex. Standard chromosomal analysis does not consistently demonstrate the cytogenetic abnormality in patients with fragile X syndrome, and molecular diagnostic techniques (DNA testing) have become the diagnostic procedure of choice for fragile X syndrome. Aetna's policy on coverage of fragile X genetic testing is based on guidelines from the ACMG (1994) and ACOG (1995). Lactase-phlorizin hydrolase, which hydrolyzes lactose, the major carbohydrate in milk, plays a critical role in the nutrition of the mammalian neonate (Montgomery et al, 1991). Lactose intolerance in adult humans is common, usually due to low levels of small intestinal lactase. Low lactase levels result from either intestinal injury or (in the majority of the world's adult population) alterations in the genetic expression of lactase. Although the mechanism of decreased lactase levels has been the subject of intensive investigation, no consensus has yet emerged. The LactoTYPE Test (Prometheus Laboratories) is a blood test that is intended to identify patients with genetic-based lactose intolerance. According to the manufacturer, this test provides a more definitive diagnosis and scientific explanation for patients with persistent symptoms. There is insufficient evidence that assessment of the genetic etiology of lactose intolerance would affect the management of patients such that clinical outcomes are improved.
Current guidelines on the management of lactose intolerance do not indicate that genetic testing is warranted (NHS, 2005; National Public Health Service for Wales, 2005).
Long QT syndrome
Long QT syndrome (LQTS) is a disorder of the heart's electrical system that predisposes individuals to irregular heartbeats, fainting spells, and sudden death. The irregular heartbeats are typically brought on by stress or vigorous activity. There are multiple genetic forms of LQTS, including Andersen-Tawil syndrome, Jervell and Lange-Nielsen syndrome, Romano-Ward syndrome, and Timothy syndrome. Voltage-gated sodium channels are transmembrane proteins that produce the ionic current responsible for the rising phase of the cardiac action potential and play an important role in the initiation, propagation, and maintenance of normal cardiac rhythm. Inherited mutations in the sodium channel alpha-subunit gene (SCN5A), the gene encoding the pore-forming subunit of the cardiac sodium channel, have been associated with distinct cardiac rhythm syndromes such as congenital long QT3 syndrome (LQT3), Brugada syndrome, isolated conduction disease, sudden unexpected nocturnal death syndrome (SUNDS), and sudden infant death syndrome (SIDS). Electrophysiological characterization of heterologously expressed mutant sodium channels has revealed gating defects that, in many cases, can explain the distinct phenotype associated with the rhythm disorder. LQTS is a familial disease characterized by an abnormally prolonged QT interval and, usually, by stress-mediated life-threatening ventricular arrhythmias (Priori et al, 2001). Characteristically, the first clinical manifestations of LQTS tend to appear during childhood or in teenagers. Two variants of LQTS have been described: a rare recessive form with congenital deafness (Jervell and Lange-Nielsen syndrome, J-LN), and a more frequent autosomal dominant form (Romano-Ward syndrome, RW). Five genes encoding subunits of cardiac ion channels have been associated with LQTS, and genotype-phenotype correlations have been identified. Of the 5 genetic variants of LQTS currently identified, the LQT1 and LQT2 subtypes involve 2 genes, KCNQ1 and HERG, which encode major potassium currents. LQT3 involves SCN5A, the gene encoding the cardiac sodium current. LQT5 and LQT6 are rare subtypes also involving the major potassium currents. The principal diagnostic and phenotypic hallmark of LQTS is abnormal prolongation of ventricular repolarization, measured as lengthening of the QT interval on the 12-lead ECG (Maron et al, 1998). This is usually most easily identified in lead II or V1, V3, or V5, but all 12 leads should be examined and the longest QT interval used; care should also be taken to exclude the U wave from the QT measurement. LQT3 appears to be the most malignant variant and may be the one less effectively managed by beta-blockers. LQT1 and LQT2 have a higher frequency of syncopal events, but their lethality is lower and the protection afforded by beta-blockers, particularly in LQT1, is much higher. The Jervell and Lange-Nielsen recessive variant is associated with very early clinical manifestations and a poorer prognosis than the Romano-Ward autosomal dominant form. The presence of syndactyly seems to represent a different genetic variant of LQTS, also associated with a poor prognosis.
Guidelines on sudden cardiac death from the European Society of Cardiology (Priori et al, 2001) state that identification of specific genetic variants of LQTS is useful in risk stratification. The clinical variants presenting association of the cardiac phenotype with syndactyly or with deafness (Jervell and Lange-Nielsen syndrome) have a more severe prognosis. Genetic defects in the cardiac sodium channel gene (SCN5A) are also associated with a higher risk of sudden cardiac death. In addition, identification of specific genetic variants may help in suggesting behavioral changes likely to reduce risk. LQT1 patients are at very high risk during exercise, particularly swimming. LQT2 patients are quite sensitive to loud noises, especially when they are asleep or resting. Genetic testing for LQTS may be indicated in persons with close relatives who have a defined mutation. Genetic testing may also be indicated in individuals with a prolonged QT interval on resting electrocardiogram (a corrected QT interval (QTc) of 470 msec or more in males and 480 msec or more in females) without an identifiable external cause for QTc prolongation. Common external causes of QTc prolongation are listed in the table below.
Table: Common External Causes of Prolongation of the QTc Interval (table content not reproduced here).
Genetic testing for long QT syndrome has not been evaluated in patients who present with a borderline QT interval, suspicious symptoms (e.g., syncope), and no relevant family history (Roden, 2008). In these patients, the incidence of false-positive and false-negative results and their implications for management remain unknown. Genetic testing may also be necessary in persons with long QT syndrome-related sudden death in close relatives. Brugada syndrome is an inherited condition comprising a specific EKG abnormality and an associated risk of ventricular fibrillation and sudden death in the setting of a structurally normal heart. Brugada syndrome is characterized by ST-segment abnormalities on EKG and a high risk of ventricular arrhythmias and sudden death. Brugada syndrome presents primarily during adulthood, but age at diagnosis ranges from 2 days to 85 years. Clinical presentations may also include sudden infant death syndrome and sudden unexpected nocturnal death syndrome, a typical presentation in individuals from Southeast Asia. Brugada et al (2005) reported that Brugada syndrome and LQTS are both due to mutations in genes encoding ion channels and that the genetic abnormalities causing Brugada syndrome have been linked to mutations in the ion channel gene SCN5A. Brugada stated that the syndrome has been identified only recently, but an analysis of data from published studies indicates that the disease is responsible for 4% to 12% of unexpected sudden deaths, and up to 50% of all sudden deaths in patients with an apparently normal heart. Brugada explained that Brugada syndrome is a clinical diagnosis based on syncopal or sudden death episodes in patients with a structurally normal heart and a characteristic ECG pattern. The ECG shows ST-segment elevation in the precordial leads V1-V3, with a morphology of the QRS complex resembling a right bundle branch block; this pattern may also be caused by J-point elevation. When ST elevation is the most prominent feature, the pattern is called coved-type. When the most prominent feature is J-point elevation, without ST elevation, the pattern is called saddle-type. Brugada pointed out that it is important to exclude other causes of ST-segment elevation before making the diagnosis of Brugada syndrome.
Brugada syndrome is inherited in an autosomal dominant manner with variable penetrance. Most individuals diagnosed with Brugada syndrome have an affected parent. The proportion of cases caused by de novo mutations is estimated at 1%. Each child of an individual with Brugada syndrome has a 50% chance of inheriting the mutation. According to Brugada, antiarrhythmic drugs do not prevent sudden death in symptomatic or asymptomatic individuals with Brugada syndrome, and implantation of an automatic cardioverter-defibrillator is the only currently proven effective therapy. To date, the great majority of identified disease-causing mutations have been located in the SCN5A gene encoding the alpha subunit of the human cardiac voltage-gated sodium channel, but such mutations can be identified in, at most, 30% of affected people. Moreover, a positive genetic test adds little or nothing to the clinical management of such a person (HRUK, 2007). The identification of an SCN5A mutation does, of course, allow screening of family members, but the usefulness of genetic screening may be less than for other familial syndromes, given that the routine 12-lead EKG (with or without provocative drug testing) appears to be a relatively effective method of screening for the condition. Hypertrophic cardiomyopathy (HCM) is a disease of the myocardium in which a portion of the myocardium is hypertrophied without any obvious cause; it is among the most common genetically transmitted cardiovascular diseases. In HCM, the heart muscle is so strong that it does not relax enough to fill the heart with blood and therefore has reduced pumping ability. The genetic abnormalities that cause HCM are heterogeneous. Hypertrophic cardiomyopathy is most commonly due to a mutation in one of 9 genes that results in a mutated protein in the sarcomere. Some of the genes responsible for HCM have not yet been identified, and among those genes that have been identified, the spectrum of possible disease-causing mutations is incomplete. As a result, a thorough evaluation of known genes requires extensive DNA sequencing, which is onerous for routine clinical testing. Less rigorous methods (such as selective sequencing) reduce the likelihood of identifying the responsible mutation. Population studies have demonstrated that some patients are compound heterozygotes (inheriting 2 different mutations within a single HCM gene), double heterozygotes (inheriting mutations in 2 HCM genes), or homozygotes (inheriting the same mutation from both parents). To be certain of detecting such genotypes, sequencing of candidate genes would need to continue in a given patient even after a single mutation was identified. In many persons with HCM mutations, the disease can be mild and the symptoms absent or minimal. In addition, phenotypic expression of HCM can be influenced by factors other than the basic genetic defect, and the clinical consequences of the genetic defect can vary. There is sufficient heterogeneity in the clinical manifestations of a given gene mutation that, even when a patient's mutation is known, his or her clinical course cannot be predicted with any degree of certainty. In addition, the prognostic impact of a given mutation may relate to a particular family and not to the population at large. Many families have their own private mutations, and thus knowledge of the gene abnormalities cannot be linked to experience from other families. Family members with echocardiographic evidence of HCM should be managed like other patients with HCM.
In general, genetically affected but phenotypically normal family members should not be subjected to the same activity restrictions as patients with HCM. Bos and colleagues (2009) stated that over the past 20 years, the pathogenic basis for HCM, the most common heritable cardiovascular disease, has been studied extensively. Affecting about 1 in 500 persons, HCM is the most common cause of sudden cardiac death (SCD) among young athletes. In recent years, genomic medicine has been moving from the bench to the bedside throughout all medical disciplines, including cardiology. Now, genomic medicine has entered clinical practice as it pertains to the evaluation and management of patients with HCM. The continuous research on and discovery of new HCM susceptibility genes, the growing amount of data from genotype-phenotype correlation studies, and the introduction of commercially available genetic tests for HCM make it essential that cardiologists understand the diagnostic, prognostic, and therapeutic implications of HCM genetic testing. Hudecova et al (2009) noted that the clinical symptoms of HCM are partly dependent on mutations in affected sarcomere genes. Different mutations in the same gene can present as malignant, with a high risk of SCD, while other mutations can be benign. The clinical symptomatology can also be influenced by other factors, such as the presence of polymorphisms in other genes. Currently, the objective of intensive clinical research is to assess the contribution of molecular genetic methods to HCM diagnostics as well as to risk stratification for SCD. It is expected that genetic analyses will have important consequences for the screening of relatives of HCM patients and also for prenatal diagnostics and genetic counseling. Shephard and Semsarian (2009) stated that genetic heart disorders are an important cause of SCD in the young. While pharmacotherapies have made some impact on the prevention of SCD, the introduction of implantable cardioverter-defibrillator (ICD) therapy has been the single major advance in the prevention of SCD in the young. In addition, the awareness that most causes of SCD in the young are inherited means that family screening of relatives of young SCD victims allows identification of previously unrecognised at-risk individuals, thereby enabling prevention of SCD in relatives. The role of genetic testing, both in living affected individuals as well as in the setting of a molecular autopsy, is emerging as a key factor in the early diagnosis of an underlying cardiovascular genetic disorder. The Heart Failure Society of America's practice guideline on genetic evaluation of cardiomyopathy (Hershberger et al, 2009) stated that genetic testing is primarily indicated for risk assessment in at-risk relatives who have little or no clinical evidence of cardiovascular disease. Genetic testing for HCM should be considered for the one most clearly affected person in a family to facilitate family screening and management. Specific genes available for testing for HCM include MYH7, MYBPC3, TNNT2, TNNI3, TPM1, ACTC, MYL2, and MYL3. MYH7 and MYBPC3 each account for 30% to 40% of mutations, and TNNT2 for 10% to 20%. A genetic cause can be identified in 35% to 45% of cases overall, and in up to 60% to 65% when the family history is positive.
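The approximate yields cited in the guideline can be gathered into a small lookup, which makes the expected return from sequencing each gene easier to see. The Python sketch below is illustrative only: the values are rounded midpoints of the ranges quoted above, and the dictionary is not part of the guideline itself.

    # Rounded midpoints of the detection-yield ranges cited above (illustrative).
    hcm_mutation_share = {
        "MYH7":   0.35,   # ~30% to 40% of identified mutations
        "MYBPC3": 0.35,   # ~30% to 40%
        "TNNT2":  0.15,   # ~10% to 20%
        # TNNI3, TPM1, ACTC, MYL2 and MYL3 account for much of the remainder
    }
    overall_yield = 0.40              # ~35% to 45% of HCM probands overall
    yield_with_family_history = 0.62  # ~60% to 65% with a positive family history

    # Rough chance that a proband with a positive family history carries an
    # identifiable MYH7 mutation (overall yield x share of identified mutations):
    print(round(yield_with_family_history * hcm_mutation_share["MYH7"], 2))  # ~0.22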
The BlueCross BlueShield Association Technology Evaluation Center (TEC) assessment on genetic testing for predisposition to inherited HCM (2010) concluded that the use of genetic testing for inherited HCM meets the TEC criteria for individuals who are at risk for development of HCM, defined as having a close relative with established HCM, when there is a known pathogenic gene mutation present in an affected relative. In order to inform and direct genetic testing for at-risk individuals, genetic testing should be initially performed in at least 1 close relative with definite HCM (the index case), if possible. This testing is intended to document whether a known pathologic mutation is present in the family, and to optimize the predictive value of predisposition testing for at-risk relatives. Due to the complexity of genetic testing for HCM and the potential for misinterpretation of results, the decision to test and the interpretation of test results should be performed by, or in consultation with, an expert in the area of medical genetics and/or HCM. The TEC assessment also concluded that genetic testing for inherited HCM does not meet the TEC criteria for predisposition testing in individuals who are at risk for development of HCM, defined as having a close relative with established HCM, when there is no known pathogenic gene mutation present in an affected relative. This includes: patients with a family history of HCM, with unknown genetic status of affected relatives; and patients with a family history of HCM, when a pathogenic mutation has not been identified in affected relatives.
Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C)
Arrhythmogenic right ventricular dysplasia/cardiomyopathy is a condition characterized by progressive fibro-fatty replacement of the myocardium that predisposes individuals to ventricular tachycardia and sudden death. The prevalence of ARVD/C is estimated to be 1 case per 10,000 population. Familial occurrence with an autosomal dominant pattern of inheritance and variable penetrance has been demonstrated. Recessive variants have been reported. It is estimated that half of affected individuals have a family history of ARVD/C and that the remaining cases are new mutations. Genetic testing has not been demonstrated to be necessary to establish the diagnosis of ARVD/C or determine its prognosis. Twelve-lead ECG and echocardiography can be used to identify affected relatives. The genetic abnormalities that cause ARVD/C are heterogeneous. The genes frequently associated with ARVD/C are PKP2 (plakophilin-2), DSG2 (desmoglein-2), and DSP (desmoplakin). A significant proportion of ARVD/C cases have been reported with no linkage to known chromosomal loci; in one report, 50% of families undergoing clinical and genetic screening did not show linkage with any known genetic loci (Corrado et al, 2000). Most affected individuals live a normal lifestyle. Management of individuals with ARVD/C is complicated by incomplete information on the natural history of the disease and the variability of disease expression even within families. High-risk individuals with signs and symptoms of ARVD/C are treated with anti-arrhythmic medications, and those at highest risk, who have been resuscitated or who are unresponsive to or intolerant of anti-arrhythmic therapy, may be considered for an ICD. According to the Heart Failure Society of America's practice guideline on the genetic evaluation of cardiomyopathy (2009), the clinical utility of all genetic testing for cardiomyopathies remains to be defined.
The guideline stated that, because the genetic knowledge base of cardiomyopathy is still emerging, practitioners caring for patients and families with genetic cardiomyopathy are encouraged to consider research participation. The Multidisciplinary Study of Right Ventricular Dysplasia (North American registry) is a 5-year study funded by the National Institutes of Health to determine how the genes responsible for ARVD/C affect the onset, course, and severity of the disease. Enrollment in the study was completed in May 2008, and the study is currently in the follow-up period.
Catecholaminergic polymorphic ventricular tachycardia (CPVT)
Catecholaminergic polymorphic ventricular tachycardia (CPVT) is a highly lethal form of inherited arrhythmogenic disease characterized by adrenergically mediated polymorphic ventricular tachycardia (Liu et al, 2007). Mutations in the cardiac ryanodine receptor (RyR2) gene and the cardiac calsequestrin (CASQ2) gene are responsible for the autosomal dominant and recessive variants of CPVT, respectively. The clinical presentation encompasses exercise- or emotion-induced syncopal events and a distinctive pattern of reproducible, stress-related, bi-directional ventricular tachycardia in the absence of both structural heart disease and a prolonged QT interval. CPVT typically begins in childhood or adolescence. The mortality rate in untreated individuals is 30% to 50% by age 40 years. Clinical evaluation by exercise stress testing and Holter monitoring, together with genetic screening, can facilitate early diagnosis. Beta-blockers are the most effective drugs for controlling arrhythmias in CPVT patients, yet about 30% of patients with CPVT still experience cardiac arrhythmias on beta-blockers and eventually require an implantable cardioverter-defibrillator. Liu et al (2008) stated that molecular genetic screening of the genes encoding the cardiac RyR2 and CASQ2 is critical to confirm an uncertain diagnosis of CPVT. Katz et al (2009) noted that CPVT is a primary electrical myocardial disease characterized by exercise- and stress-related ventricular tachycardia manifested as syncope and sudden death. The disease has a heterogeneous genetic basis, with mutations in the cardiac RyR2 gene accounting for an autosomal dominant form (CPVT1) in approximately 50% and mutations in the cardiac CASQ2 gene accounting for an autosomal recessive form (CPVT2) in up to 2% of CPVT cases. Both RyR2 and calsequestrin are important participants in cardiac cellular calcium homeostasis. These researchers reviewed the physiology of cardiac calcium homeostasis, including cardiac excitation-contraction coupling and myocyte calcium cycling. Although the clinical presentation of CPVT is similar in many respects to LQTS, there are important differences that are relevant to genetic testing. CPVT appears to be a more malignant condition, as many people are asymptomatic before the index lethal event and the majority of cardiac events occur before 20 years of age. Affected people are advised to avoid exercise-related triggers and to start prophylactic beta-blockers, with dose titration guided by treadmill testing. Genetic testing has been recommended in individuals with clinical features considered typical of CPVT following expert clinical assessment (HRUK, 2008). Clinically, the condition is difficult to diagnose in asymptomatic family members, as the ECG and echocardiogram are completely normal at rest.
Exercise stress testing has been advised in family members in order to identify exercise-induced ventricular arrhythmias, but the sensitivity of this clinical test is unknown. Although the diagnostic yield from genetic testing is less than that for LQTS (about 50%) in patients with typical clinical features, a positive genetic test may be of value for the individual patient (given the prognostic implications) and for screening family members (given the difficulties in clinical screening methods) (HRUK, 2008). The RyR2 gene is large, and a 'targeted' approach is usually undertaken, in which only exons that have been previously implicated are examined. The 2006 guidelines from the American College of Cardiology on the management of patients with ventricular arrhythmias and the prevention of sudden cardiac death (Zipes et al, 2006) included the following recommendations for patients with CPVT: there is evidence and/or general agreement supporting the use of beta-blockers for patients clinically diagnosed on the basis of spontaneous or documented stress-induced ventricular arrhythmias; there is evidence and/or general agreement supporting the use of an ICD in combination with beta-blockers for survivors of cardiac arrest who have a reasonable expectation of survival with a good functional capacity for more than 1 year; the weight of evidence and/or opinion supports the use of beta-blockers in patients without clinical manifestations who are diagnosed in childhood based upon genetic analysis; the weight of evidence and/or opinion supports the use of an ICD in combination with beta-blockers for patients with a history of syncope and/or sustained ventricular tachycardia while receiving beta-blockers who have a reasonable expectation of survival with a good functional capacity for more than 1 year; and the usefulness and/or efficacy of beta-blockers is less well established in patients without clinical evidence of arrhythmias who are diagnosed in adulthood based upon genetic analysis. Hemochromatosis, a condition involving excess accumulation of iron, can lead to iron overload, which in turn can result in complications such as cirrhosis, diabetes, cardiomyopathy, and arthritis (Burke, 1992; Hanson et al, 2001). Hereditary hemochromatosis (HHC) is characterized by inappropriately increased iron absorption from the duodenum and upper intestine, with consequent deposition in various parenchymal organs, notably the liver, pancreas, joints, heart, pituitary gland, and skin, with resultant end-organ damage (Limdi and Crampton, 2004). Clinical features may be non-specific and include lethargy and malaise, or may reflect target organ damage and present with abnormal liver tests, cirrhosis, diabetes mellitus, arthropathy, cardiomyopathy, skin pigmentation, and gonadal failure. Early recognition and treatment (phlebotomy) is essential to prevent irreversible complications such as cirrhosis and hepatocellular carcinoma. HHC is an autosomal recessive condition associated with mutations of the HFE gene. Two of the 37 allelic variants of the HFE gene, C282Y and H63D, are significantly correlated with HHC. C282Y is the more severe mutation, and homozygosity for the C282Y genotype accounts for the majority of clinically penetrant cases. Hanson et al (2001) reported that homozygosity for the C282Y mutation has been found in 52% to 100% of clinically diagnosed index cases in previous studies.
Five percent of HHC probands were found by Hanson et al to be compound heterozygotes (C282Y/H63D), and 1.5% were homozygous for the H63D mutation; 3.6% were C282Y heterozygotes, and 5.2% were H63D heterozygotes. In 7% of cases, C282Y and H63D mutations were not present. In the general population, the frequency of the C282Y/C282Y genotype is 0.4%. HHC is a very common genetic defect in the Caucasian population. C282Y heterozygosity ranges from 9.2% in Europeans to nil in Asian, Indian subcontinent, African, Middle Eastern, and Australian populations (Hanson et al, 2001). The H63D carrier frequency is 22% in European populations. Accurate data on the penetrance of the different HFE genotypes are not available, but current data suggest that clinical disease does not develop in a substantial proportion of people with this genotype. Available data suggest that up to 38% to 50% of C282Y homozygotes may develop iron overload, with up to 10% to 33% eventually developing hemochromatosis-associated morbidity (Whitlock et al, 2006). A pooled analysis found that patients with the HFE genotypes C282Y/H63D and H63D/H63D are also at increased risk for iron overload, yet overall, disease is likely to develop in fewer than 1% of people with these genotypes (Burke, 1992). Thus, DNA-based tests for hemochromatosis identify a genetic risk rather than the disease itself. Environmental factors such as diet and exposure to alcohol or other hepatotoxins may modify the clinical outcome in patients with hemochromatosis, and variations in other genes affecting iron metabolism may also be a factor. As a result, the clinical condition of iron overload is most reliably diagnosed on the basis of biochemical evidence of excess body iron (Burke, 1992). Whether it is beneficial to screen asymptomatic people for a genetic risk of iron overload is a matter of debate. To date, population screening for HHC is not recommended because of uncertainties about optimal screening strategies, optimal care for susceptible persons, laboratory standardization, and the potential for stigmatization or discrimination (Hanson et al, 2001; Whitlock et al, 2006). A systematic evidence review prepared for the U.S. Preventive Services Task Force concluded: "Research addressing genetic screening for hereditary hemochromatosis remains insufficient to confidently project the impact of, or estimate the benefit from, widespread or high-risk genetic screening for hereditary hemochromatosis" (Whitlock et al, 2006).
Familial nephrotic syndrome (NPHS1, NPHS2)
Nephrotic syndrome comes in 2 variants: (i) those sensitive to treatment with immunosuppressants (steroid-sensitive), and (ii) those resistant to immunosuppressants (steroid-resistant). Familial forms of nephrotic syndrome are steroid-resistant (Niaudet, 2007). Mutations in two genes, NPHS1 and NPHS2, have been associated with familial nephrotic syndrome. Mutations in the gene for podocin, called NPHS2 (also known as familial focal glomerulosclerosis), are observed in patients with both familial and sporadic steroid-resistant nephrotic syndrome (SRNS). Identifying children with nephrotic syndrome due to NPHS2 mutations can avoid unnecessary exposure to immunosuppressive therapy, because immunosuppressive therapy has not been shown to be effective in treating these children (Niaudet, 2007). Thus, authorities have recommended testing for such mutations in those with a familial history of steroid-resistant nephrotic syndrome and in children with steroid-resistant disease.
Some have suggested that, to avoid unnecessary exposure to steroid therapy, all children with a first episode of the nephrotic syndrome should be screened for NPHS2 mutations (Niaudet, 2007). However, given that over 85% of children with idiopathic nephrotic syndrome are steroid-sensitive and only approximately 20% of steroid-resistant patients have NPHS2 mutations, screening for abnormalities at this genetic locus would identify fewer than 5% of all cases (roughly 15% steroid-resistant × 20% with NPHS2 mutations ≈ 3%). However, screening a child with a first episode of the nephrotic syndrome who has a familial history of steroid-resistant nephrotic syndrome has been recommended, because such children are at increased risk of having an NPHS2 gene mutation. Mutations in the gene for nephrin, called NPHS1, cause the congenital nephrotic syndrome of the Finnish type (CNF) (Niaudet, 2007). CNF is inherited as an autosomal recessive trait, with both sexes being involved equally. There are no manifestations of the disease in heterozygous individuals. Most infants with CNF are born prematurely (35 to 38 weeks), with a low birth weight for gestational age. Edema is present at birth or appears during the first week of life in 50% of cases. Severe nephrotic syndrome with marked ascites is always present by 3 months. End-stage renal failure usually occurs between 3 and 8 years of age. Prolonged survival is possible with aggressive supportive treatment, including dialysis and renal transplantation. The nephrotic syndrome in CNF is always resistant to corticosteroids and immunosuppressive drugs, since this is not an immunologic disease (Niaudet, 2007). Furthermore, these drugs may be harmful because affected individuals already have a high susceptibility to infection. CNF becomes manifest during early fetal life, beginning at a gestational age of 15 to 16 weeks. The initial symptom is fetal proteinuria, which leads to a more than 10-fold increase in the amniotic fluid alpha-fetoprotein (AFP) concentration (Niaudet, 2007). A parallel, but less important, increase in the maternal plasma AFP level is observed. These changes are not specific, but they may permit the antenatal diagnosis of CNF in high-risk families in which termination of the pregnancy might be considered. However, false-positive results do occur, often leading to abortion of healthy fetuses. Genetic linkage and haplotype analyses may diminish the risk of false-positive results in informative families (Niaudet, 2007). The 4 major haplotypes, which cover 90% of the CNF alleles in Finland, have been identified, resulting in a test with up to 95% accuracy. Authorities do not recommend screening for NPHS1 mutations in all children with a first episode of nephrotic syndrome, for the reasons noted above regarding NPHS2 mutation screening. However, genetic testing may be indicated for infants with congenital nephrotic syndrome (i.e., appearing within the first months of life) who are of Finnish descent and/or who have a family history that suggests a familial cause of congenital nephrotic syndrome. The primary purpose of this testing is pregnancy planning. Detection of an NPHS1 mutation also has therapeutic implications, as such nephrotic syndrome is steroid-resistant.
Primary dystonia (DYT-1)
Dystonia consists of repetitive, patterned, twisting, and sustained movements that may be either slow or rapid. Dystonic states are classified as primary, secondary, or psychogenic depending upon the cause (Jankovic, 2007).
By definition, primary dystonia is associated with no other neurologic impairment, such as intellectual, pyramidal, cerebellar, or sensory deficits. Cerebral palsy is the most common cause of secondary dystonia. Primary dystonia may be sporadic or inherited (Jankovic, 2007). Cases with onset in childhood usually are inherited in an autosomal dominant pattern. Many patients with hereditary dystonia have a mutation in the TOR1A (DYT1) gene, which encodes the protein torsinA, an ATP-binding protein, at the 9q34 locus. The role of torsinA in the pathogenesis of primary dystonia is unknown. DNA testing for the abnormal TOR1A gene can be performed on individuals with dystonia. The purpose of such testing is to help rule out secondary or psychogenic causes of dystonia, and for family planning purposes.

An estimated 8% to 12% of persons with melanoma have a family history of the disease, but not all of these individuals have hereditary melanoma (Tsao and Haluska, 2007). In some cases, the apparent familial inheritance pattern may be due to clustering of sporadic cases in families with common heavy sun exposure and susceptible skin type. Approximately 10 percent of melanomas are familial. CDKN2A/p16 (also known as MTS1, INK4, MLM, p16INK4A) (e.g., MELARIS) is the major gene associated with melanoma. A subset of CDKN2A mutation carrier families also displays an increased risk of pancreatic cancer; however, at this time, detecting a CDKN2A mutation does not affect the clinical management of an affected patient or at-risk family members. Other genes commonly associated with hereditary pancreatic cancer include BRCA2 and PALB2 (e.g., PANEXIA). Regardless of the results of genetic testing, close dermatologic surveillance is recommended for individuals at risk for familial melanoma based on family history, and the efficacy of screening for pancreatic cancer is uncertain. A melanoma susceptibility locus has been identified on chromosome 9p21; this has been designated CDKN2A (also known as MTS1 (multiple tumor suppressor 1)) (Tsao and Haluska, 2007). There is a variable rate of CDKN2A mutations in patients with hereditary melanoma. The risk of a CDKN2A mutation varies from approximately 10% for families with at least 2 relatives having melanoma, to more than 40% for families having multiple affected 1st-degree relatives spanning several generations. Persons at increased risk of melanoma are managed with close clinical surveillance and education in risk-reduction behavior (e.g., sun avoidance, sunscreen use). It is unclear how CDKN2A genetic test information would alter clinical recommendations (Tsao and Haluska, 2007). The negative predictive value of a negative test for a CDKN2A mutation is also not established, since many familial cases occur in the absence of CDKN2A mutations. It is estimated that the prevalence of CDKN2A mutation carriers is less than 1% in high-incidence populations. Thus, no mutations will be identifiable in the majority of families presenting to clinical geneticists. The American Society of Clinical Oncology (ASCO) has issued a consensus report on the utility of genetic testing for cancer susceptibility (ASCO, 1996), and recommendations for the process of genetic testing were updated in 2003 (ASCO, 2003). The report notes that the sensitivity and specificity of the commercially available test for CDKN2A mutations are not fully known.
Because of the difficulties with interpretation of the genetic tests, and because test results do not alter patient or family member management, ASCO recommends that CDKN2A testing be performed only in the context of a clinical trial. The Scottish Intercollegiate Guidelines Network (SIGN, 2003) protocols on management of cutaneous melanoma reached similar conclusions, stating that genetic testing in familial or sporadic melanoma is not appropriate in a routine clinical setting and should only be undertaken in the context of appropriate research studies. The Melanoma Genetics Consortium recommends that genetic testing for melanoma susceptibility should not be offered outside of a research setting (Kefford et al, 2002). They state that "until further data become available, however, clinical evaluation of risk remains the gold standard for preventing melanoma. First-degree relatives of individuals at high risk should be engaged in the same programmes of melanoma prevention and surveillance irrespective of the results of any genetic testing."

Charcot-Marie-Tooth disease type 1A (PMP-22)

Charcot-Marie-Tooth (CMT) disease, also known as peroneal muscular atrophy, progressive neural muscular atrophy, and hereditary motor and sensory neuropathy, is one of the 3 major types of hereditary neuropathy. With an estimated prevalence of at least 1:2,500 (autosomal dominance), CMT is one of the most common genetic neuromuscular disorders, affecting approximately 125,000 persons in the United States. This hereditary peripheral neuropathy is genetically and clinically heterogeneous. It is usually inherited in an autosomal dominant manner, and occasionally in an autosomal recessive manner. Sporadic as well as X-linked cases have also been reported. In the X-linked recessive pattern, only males develop the disease, although females who inherit the defective gene can pass the disease on to their sons. In the X-linked dominant pattern, an affected mother can pass on the disorder to both sons and daughters, while an affected father can only pass it on to his daughters. The clinical manifestations can vary greatly in severity and age of onset, and may be so mild that they are undetectable by patients, their families and physicians. Charcot-Marie-Tooth disease is usually diagnosed by an extensive physical examination, assessing characteristic weakness in the foot, leg, and hand, as well as deformities and impaired function in walking and manual manipulation. The clinical diagnosis is then confirmed by electromyogram and nerve conduction velocity tests, and sometimes by biopsy of muscle and of sural cutaneous nerve. Since CMT is a hereditary disease, family history can also help to confirm the diagnosis. Based on studies of motor nerve conduction velocity, CMT can be further classified into 2 types: (i) CMT Type I -- slow conduction velocity (less than 40 meters/second for the median nerve or less than 15 meters/second for the peroneal nerve), which accounts for 70% of all CMT cases; and (ii) CMT Type II -- normal or near-normal nerve conduction velocity with decreased amplitude, which accounts for the remaining 30% of CMT cases. CMT Type I is a demyelinating neuropathy with hypertrophic changes in peripheral nerves, and has its onset usually during late childhood. On the other hand, CMT Type II is a non-demyelinating neuronal disorder without hypertrophic changes, and has its onset generally during adolescence.
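As a point of reference only (this sketch is not from the cited sources; the function and argument names are hypothetical), the velocity-based distinction described above can be written as a simple decision rule:

```python
def classify_cmt_by_conduction(median_mcv_m_per_s: float,
                               peroneal_mcv_m_per_s: float) -> str:
    """Illustrative only: apply the conduction-velocity thresholds quoted in the
    text (median nerve < 40 m/s or peroneal nerve < 15 m/s suggests CMT Type I).
    A real diagnosis also weighs amplitudes, EMG findings, family history and,
    in selected cases, nerve biopsy."""
    if median_mcv_m_per_s < 40 or peroneal_mcv_m_per_s < 15:
        return "consistent with CMT Type I (slowed conduction velocity)"
    return "consistent with CMT Type II (normal or near-normal velocity; check amplitude)"

# Example: a median-nerve motor conduction velocity of 28 m/s
print(classify_cmt_by_conduction(median_mcv_m_per_s=28.0, peroneal_mcv_m_per_s=20.0))
```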
Both CMT Types I and II are characterized by a slow degeneration of peripheral nerves and roots, resulting in distal muscle atrophy commencing in the lower extremities and affecting the upper extremities several years later. Symptoms include foot drop or clubfoot, paresthesia in the legs, sloping gait, later weakness and atrophy of the hands and then the arms, absence or reduction of deep tendon reflexes, and occasionally mild sensory loss. Charcot-Marie-Tooth disease is not a fatal disorder. It does not shorten the normal life expectancy of patients, and it does not affect them mentally. As stated earlier, there is a wide range of variation in the clinical manifestations of CMT -- the degree of severity can vary considerably from patient to patient, even among affected family members within the same generation. The condition can range from having no problems to having major difficulties in ambulation in early adult life; however, the latter is unusual. Most patients are able to ambulate and have gainful employment until old age. Currently, there is no specific treatment for this disease. Management of the majority of patients with CMT disease consists of supportive care with emphasis on proper bracing, foot care, physical therapy and occupational counseling. For example, the legs and shoes can be fitted with light braces and springs, respectively, to overcome foot drop. If foot drop is severe and the disease has become stationary, the ankle can be stabilized by arthrodesis. The underlying genetic basis for CMT Type I has been characterized. A point mutation in the PMP22 gene, which encodes a peripheral myelin protein with an apparent molecular weight of 22,000, or a DNA duplication of a specific region (approximately 1.5 megabases) including the PMP22 gene in the proximal short arm of chromosome 17 (band 17p11.2-p12), has been identified in 70% of clinically diagnosed patients -- CMT Type IA. Thus, patients with CMT Type IA represent approximately 50% of all CMT cases. Other CMT Type I patients (CMT Type IB) exhibit an abnormality (Duffy locus) in the proximal long arm of chromosome 1 (band 1q21-22). Presently, no test is available for the dominant CMTIB gene on chromosome 1. On the other hand, a CMT Type IA DNA test is available commercially. The test is accomplished through a blood sample analysis -- DNA is extracted from leukocytes of patients, and pulsed-field gel electrophoresis is employed to isolate large segments of DNA encompassing CMTIA duplication-specific junction fragments, which are then detected by hybridization with a CMTIA duplication-specific probe (CMTIA-REP). This probe identifies the homologous regions that flank the CMTIA duplication monomer unit. A positive CMTIA DNA test means the presence of a 500-kilobase CMTIA duplication-specific junction fragment, and is diagnostic for CMT Type IA. A negative CMT Type IA test means the absence of the CMTIA duplication-specific junction fragment, and does not rule out a diagnosis of CMT disease; this is because patients with CMT Type IA represent approximately 50% of all CMT cases. The value of this molecular test in family planning is questionable because of its relatively low detection rate and its inability to predict the severity of the disease. Moreover, it is likely that there are undiscovered CMTI genes, since there are dominant CMTI pedigrees that do not have abnormalities at the known chromosome 1 and 17 locations (CMT Type IC). In addition, other investigators have reported X-linked forms of CMTI at the regions of Xq13-21 and Xq26.
Since CMT is not life-threatening, rarely severely disabling, and has no specific treatment, it is unclear how the results of this CMT Type I DNA test, which cannot predict the severity of the disease, would affect family planning. Moreover, because of its low detection rate, the CMT Type I DNA test appears to be inferior to the conventional means of diagnosis through physical examination, family history, electromyography and nerve conduction velocity studies. Thus, the sole value of genetic testing for CMTIA is to establish the diagnosis and to distinguish this from other causes of neuropathy.

Familial amyotrophic lateral sclerosis (SOD1 Mutation)

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease involving both the upper motor neurons (UMN) and lower motor neurons (LMN). UMN signs include hyperreflexia, extensor plantar response, increased muscle tone, and weakness in a topographical representation. LMN signs include weakness, muscle wasting, hyporeflexia, muscle cramps, and fasciculations. In the early stage of the disease, the clinical aspects of ALS can vary. Affected individuals typically present with asymmetric focal weakness of the extremities (stumbling or poor handgrip) or bulbar findings (dysarthria, dysphagia). Other findings include muscle fasciculations, muscle cramps, and lability of affect but not necessarily mood. Regardless of initial symptoms, atrophy and weakness eventually affect other muscles. Approximately 5,000 people in the U.S. are diagnosed with ALS each year. Most people with ALS have a form of the condition that is described as sporadic or non-inherited. The cause of sporadic ALS is largely unknown but probably involves a combination of genetic and environmental factors. About 10% of people with ALS have a familial form of the condition, which is caused by an inherited genetic mutation, usually transmitted as an autosomal dominant trait. The mean age of onset of ALS in individuals with no known family history is 56 years, and in familial ALS it is 46 years. The diagnosis of ALS is based on clinical features, electrodiagnostic testing (EMG), and exclusion of other health conditions with related symptoms. At present, genetic testing in ALS has no value in making the diagnosis. The only genetic test currently available detects the SOD1 mutation. Since only 20% of familial ALS patients will test positive for an SOD1 mutation, this test has limited value in genetic counseling.

Migrainous vertigo is a term used to describe episodic vertigo in patients with a history of migraines or with other clinical features of migraine. Approximately 20% to 33% of migraine patients experience episodic vertigo. The underlying cause of migrainous vertigo is not well understood. There are no confirmatory diagnostic tests or susceptibility genes associated with migrainous vertigo. Other conditions, specifically Meniere's disease and structural and vascular brainstem disease, must be excluded (Black, 2006).

At this time, there are no susceptibility genes that have been unequivocally associated with prostate cancer predisposition. Genetic testing for prostate cancer is currently available only within the context of a research study. A special report on prostate cancer genetics by the BlueCross BlueShield Association Technology Evaluation Center (BCBSA, 2008) stated that single-nucleotide polymorphisms (SNPs) do not predict certainty of disease, nor do they clearly predict aggressive versus indolent disease.
The report noted that, while the monitoring of high-risk men may improve outcomes, it is also possible that these benefits could be offset by the harms of identifying and treating additional indolent disease.

Type 2 diabetes

Available evidence has shown that screening for a panel of gene variants associated with type 2 diabetes does not substantially improve prediction of risk for the disease beyond an assessment based on traditional risk factors. Available evidence suggests that both genetic and environmental factors play a role in the development of type 2 diabetes. Recent genetic studies have identified 18 gene variants that appear to increase the risk for type 2 diabetes. A study reported in the New England Journal of Medicine evaluated the potential utility of genetic screening in predicting future risk of type 2 diabetes (Meigs et al, 2008). The investigators analyzed records from the Framingham Offspring Study, which follows a group of adult children of participants of the original Framingham Heart Study, to evaluate risk factors for the development of cardiovascular disease, including diabetes. Full genotype results for the 18 gene variants as well as clinical outcomes were available for 2,377 participants, 255 of whom developed type 2 diabetes during 28 years of follow-up. Each participant was assigned a genotype score, based on the number of risk-associated gene copies inherited (a minimal sketch of this kind of additive allele-count score is given after this passage). The investigators compared the predictive value of the genotype score to that of family history alone or of physiological risk factors. Overall, the genotype score was 17.7 among those who developed diabetes and 17.1 among those who did not. The investigators found that, while the genetic score did help predict who would develop diabetes, once other known risk factors were taken into consideration, it offered little additional predictive power. The investigators concluded that the genotype score resulted in the appropriate risk reclassification of, at most, 4% of the subjects, compared with risk estimates based on age, sex, blood lipids, body mass index, family history, and other standard risk factors. The investigators reported that their findings "underscore the view that identification of adverse phenotypic characteristics remains the cornerstone of approaches to predicting the risk of type 2 diabetes". A similar study among Swedish and Finnish patients, published in the same issue of the New England Journal of Medicine, also found only a small improvement in risk estimates when genetic factors were added to traditional risk factors (Lyssenko et al, 2008).

The OncoVue breast cancer risk test

The OncoVue breast cancer risk test (Intergenetics, Inc., Oklahoma City, OK) is a genetic-based breast cancer risk test that incorporates both individualized genetic-based single nucleotide polymorphisms (SNPs) and personal history measures to arrive at an estimate of a woman's breast cancer risk at various stages in her life. Cells that are collected from the inside of the cheek are analyzed using thousands of proprietary (Intergenetics, Inc.) combinations of multiple genes. The genetic information and the data from the medical history are combined to assign a numeric value that represents a woman's lifetime risk of developing breast cancer. Her OncoVue risk test will tell her whether she is at standard, moderate or high risk of developing breast cancer during each stage of her life.
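As referenced above for the Framingham analysis, the genotype score is an additive count of risk alleles across the 18 loci (possible range, 0 to 36). The sketch below is my own illustration of such a score, not code from Meigs et al (2008); the SNP labels and genotypes are invented.

```python
from typing import Dict

def genotype_score(risk_allele_counts: Dict[str, int]) -> int:
    """Additive genotype score: one point per inherited copy (0, 1 or 2)
    of the risk-associated allele at each locus."""
    for snp, copies in risk_allele_counts.items():
        if copies not in (0, 1, 2):
            raise ValueError(f"{snp}: expected 0, 1 or 2 risk-allele copies, got {copies}")
    return sum(risk_allele_counts.values())

# Invented example: 18 hypothetical diabetes-associated loci for one person.
example = {f"risk_snp_{i:02d}": c for i, c in enumerate(
    [1, 2, 0, 1, 1, 2, 1, 0, 1, 2, 1, 1, 0, 1, 2, 1, 1, 0], start=1)}
print(genotype_score(example))  # 18, close to the group means reported above
```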
OncoVue is based on an unpublished case-control association study that examined common genetic polymorphisms and medical history variables. Currently, 117 common polymorphisms (mostly SNPs) located in over 87 genes believed to alter breast cancer risk are examined. Most result in amino acid changes in the proteins encoded by the genes in which they occur. The medical history variables include answers to questions concerning women's reproductive histories, family histories of cancer and a few other questions related to general health. There are no published controlled studies on the OncoVue breast cancer risk test in the peer-reviewed medical literature. Gail (2009) evaluated the value of adding SNP genotypes to a breast cancer risk model. Criteria based on 4 clinical or public health applications were used to compare the National Cancer Institute's Breast Cancer Risk Assessment Tool (BCRAT) with BCRATplus7, which includes 7 SNPs previously associated with breast cancer. Criteria included the number of expected life-threatening events for the decision to take tamoxifen, expected decision losses (in units of the loss from giving a mammogram to a woman without detectable breast cancer) for the decision to have a mammogram, rates of risk re-classification, and the number of lives saved by risk-based allocation of screening mammography. For all calculations, the following assumptions were made: Hardy-Weinberg equilibrium, linkage equilibrium across SNPs, additive effects of alleles at each locus, no interactions on the logistic scale among SNPs or with factors in BCRAT, and independence of SNPs from factors in BCRAT. Improvements in expected numbers of life-threatening events were only 0.07 and 0.81 for deciding whether to take tamoxifen to prevent breast cancer for women aged 50 to 59 and 40 to 49 years, respectively. For deciding whether to recommend screening mammograms to women aged 50 to 54 years, the reduction in expected losses was 0.86 if the ideal breast cancer prevalence threshold for recommending mammography was that of women aged 50 to 54 years. Cross-classification of risks indicated that some women classified by BCRAT would have different classifications with BCRATplus7, which might be useful if BCRATplus7 were well calibrated. Improvements from BCRATplus7 were small for risk-based allocation of mammograms under cost constraints. The author reported that the gains from BCRATplus7 were small in the applications examined and that models with SNPs, such as BCRATplus7, have not been validated for calibration in independent cohort data. The author concluded that additional studies are needed to validate a model with SNPs and justify its use. There is insufficient evidence on the effectiveness of the OncoVue breast cancer risk test in determining a woman's breast cancer risk at various stages in her life.

The phosphatase and tensin homolog (PTEN) gene test

Phosphatase and tensin homolog (PTEN) hamartoma tumor syndrome is an autosomal dominant group of disorders with significant clinical overlap, most notably predisposition to hamartomatous polyposis of the gastro-intestinal tract. Laurent-Puig et al (2009) stated that the occurrence of a KRAS mutation is predictive of non-response and shorter survival in patients treated with an anti-epidermal growth factor receptor (anti-EGFR) antibody for metastatic colorectal cancer (mCRC), leading the European Medicines Agency to limit its use to patients with wild-type KRAS tumors.
However, only 50% of these patients will benefit from treatment, suggesting the need to identify additional biomarkers for cetuximab-based treatment efficacy. These investigators retrospectively collected tumors from 173 patients with mCRC. All but 1 patient received a cetuximab-based regimen as second-line or greater therapy. KRAS and BRAF status were assessed by allelic discrimination. EGFR amplification was assessed by chromogenic in situ hybridization and fluorescent in situ hybridization, and the expression of PTEN was assessed by immunochemistry. In patients with KRAS wild-type tumors (n = 116), BRAF mutations (n = 5) were weakly associated with lack of response (p = 0.063) but were strongly associated with shorter progression-free survival (p < 0.001) and shorter overall survival (OS; p < 0.001). A high EGFR polysomy or an EGFR amplification was found in 17.7% of the patients and was associated with response (p = 0.015). PTEN null expression was found in 19.9% of the patients and was associated with shorter OS (p = 0.013). In multi-variate analysis, BRAF mutation and PTEN expression status were associated with OS. The authors concluded that BRAF status, EGFR amplification, and cytoplasmic expression of PTEN were associated with outcome measures in KRAS wild-type patients treated with a cetuximab-based regimen. They stated that more studies in clinical trial cohorts are needed to confirm the clinical utility of these markers. Siena et al (2009) noted that the monoclonal antibodies panitumumab and cetuximab that target the EGFR have expanded the range of treatment options for mCRC. Initial evaluation of these agents as monotherapy in patients with EGFR-expressing chemotherapy-refractory tumors yielded response rates of approximately 10%. The realization that detection of positive EGFR expression by immunostaining does not reliably predict clinical outcome of EGFR-targeted treatment has led to an intense search for alternative predictive biomarkers. Oncogenic activation of signaling pathways downstream of the EGFR, such as mutation of the KRAS, BRAF, or PIK3CA oncogenes, or inactivation of the PTEN tumor suppressor gene, is central to the progression of colorectal cancer. Tumor KRAS mutations, which may be present in 35% to 45% of patients with colorectal cancer, have emerged as an important predictive marker of resistance to panitumumab or cetuximab treatment. In addition, among colorectal tumors carrying wild-type KRAS, mutation of BRAF or PIK3CA or loss of PTEN expression may be associated with resistance to EGFR-targeted monoclonal antibody treatment, although these additional biomarkers require further validation before incorporation into clinical practice. Additional knowledge of the molecular basis for sensitivity or resistance to EGFR-targeted monoclonal antibodies will allow the development of new treatment algorithms to identify patients who are most likely to respond to treatment and could also provide a rationale for combining therapies to overcome primary resistance. The use of KRAS mutations as a selection biomarker for anti-EGFR monoclonal antibody (e.g., panitumumab or cetuximab) treatment is the first major step toward individualized treatment for patients with mCRC.

Epsilon-sarcoglycan gene (SGCE) deletion analysis

Myoclonus-dystonia (M-D), an autosomal dominant inherited movement disorder, has been associated with mutations in the epsilon-sarcoglycan gene (SGCE) on 7q21.
Raymond et al (2008) noted that M-D due to SGCE mutations is characterized by early-onset myoclonic jerks, often associated with dystonia. Penetrance is influenced by parental sex, but other sex effects have not been established. In 42 affected individuals from 11 families with identified mutations, these researchers found that sex was highly associated with age at onset regardless of mutation type; the median age at onset was 5 years for girls versus 8 years for boys (p < 0.0097). Moreover, the authors found no association between mutation type and phenotype. Ritz et al (2009) stated that various mutations within the SGCE gene have been associated with M-D, but mutations are detected in only about 30% of patients. The lack of stringent clinical inclusion criteria and limitations of mutation screens by direct sequencing might explain this observation. Eighty-six M-D index patients from the Dutch national referral center for M-D underwent neurological examination and were classified according to previously published criteria into definite, probable and possible M-D. Sequence analysis of the SGCE gene and screening for copy number variations were performed. In addition, screening was carried out for the 3 bp deletion in exon 5 of the DYT1 gene. Based on clinical examination, 24 definite, 23 probable and 39 possible M-D patients were identified. Thirteen of the 86 M-D index patients carried a SGCE mutation: 7 nonsense mutations, 2 splice site mutations, 3 missense mutations (2 within 1 patient) and 1 multi-exonic deletion. In the definite M-D group, 50% carried an SGCE mutation, as did 1 single patient in the probable group (4%). One possible M-D patient showed a 4 bp deletion in the DYT1 gene (c.934_937delAGAG). The authors concluded that mutation carriers were mainly identified in the definite M-D group. However, in 50% of definite M-D cases, no mutation could be identified.

Home genetic tests

Walker (2010) stated that, according to an undercover investigation by the Government Accountability Office (GAO), home genetic tests often provide incomplete or misleading information to consumers. For the GAO investigation, investigators purchased 10 tests each from 4 different direct-to-consumer genetic test companies: 23andMe, deCode Genetics, Navigenics, and Pathway Genomics. Five saliva donors each sent 2 DNA samples to each company. In one sample, the donor used his or her real personal and medical information, and for the second sample, they developed faux identifying and medical information. The results, according to the GAO, were far from precise. For example, a donor was told by a company that he had a below-average risk of developing hypertension, but a second company rated his risk as average, while a third company, using DNA from the same donor, said the sample revealed an above-average risk for hypertension. In some cases, the results conflicted with the donor's real medical condition. None of the genetic tests currently offered to consumers has undergone FDA pre-market review.

Familial Cold Autoinflammatory Syndrome

Familial cold autoinflammatory syndrome (FCAS), also known as familial cold urticaria (FCU), is an autosomal dominant condition characterized by rash, conjunctivitis, fever/chills and arthralgias elicited by exposure to cold -- sometimes temperatures below 22°C (72°F). It is rare, with an estimated prevalence of 1 per million people, and mainly affects Americans and Europeans.
Familial cold autoinflammatory syndrome is one of the cryopyrin-associated periodic syndromes (CAPS) caused by mutations in the CIAS1/NALP3 (also known as NLRP3) gene at location 1q44. Familial cold autoinflammatory syndrome shares symptoms with, and should not be confused with, acquired cold urticaria, a more common condition mediated by different mechanisms that usually develops later in life and is rarely inherited. There is insufficient evidence to support the use of genetic testing in the management of patients with FCAS/FCU. UpToDate reviews on "Cold urticaria" (Maurer, 2011) and "Cryopyrin-associated periodic syndromes and related disorders" (Nigrovic, 2011) do not mention the use of genetic testing.

Santome Collazo et al (2010) noted that congenital adrenal hyperplasia (CAH) is not an infrequent genetic disorder, for which mutation-based analysis of the CYP21A2 gene is a useful tool. An UpToDate review on "Diagnosis of classic congenital adrenal hyperplasia due to 21-hydroxylase deficiency" (Merke, 2011) states that genetic testing also can be used to evaluate borderline cases. Genetic testing detects approximately 95 percent of mutant alleles. Furthermore, the Endocrine Society's clinical practice guideline on congenital adrenal hyperplasia (Speiser et al, 2010) suggested genotyping only when results of the adrenocortical profile following cosyntropin stimulation testing are equivocal or for purposes of genetic counseling. The Task Force recommends that genetic counseling be given to parents at the birth of a CAH child, and to adolescents at the transition to adult care.

Wappler (2010) stated that malignant hyperthermia (MH)-susceptible patients have an increased risk during anesthesia. The aim of this review was to present current knowledge about the pathophysiology and triggers of MH as well as concepts for the safe anesthesiological management of these patients. Trigger substances and mechanisms have been well-defined to date. Anesthesia can be safely performed with i.v. anesthetics, nitrous oxide, non-depolarizing muscle relaxants, and local anesthetics, as well as xenon. Attention must be directed to the preparation of the anesthetic machine, because modern work-stations need longer cleansing times than their predecessors. Alternatively, activated charcoal might be beneficial for the elimination of volatile anesthetics. Day case surgery can be performed in MH-susceptible patients if all safety aspects are regarded. Whether there is an association between MH susceptibility and other disorders is still a matter of debate. The authors concluded that the incidence of MH is low, but the prevalence can be estimated as up to 1:3,000. Because MH is potentially lethal, it is relevant to establish management concepts for peri-operative care in susceptible patients. This includes pre-operative genetic and in-vitro muscle contracture testing (IVCT), preparation of the anesthetic work-station, use of non-triggering anesthetics, adequate monitoring, availability of sufficient quantities of dantrolene and appropriate post-operative care. Taking these items into account, anesthesia can be safely performed in susceptible patients. Moreover, an UpToDate review on "Susceptibility to malignant hyperthermia" (Litman, 2011) states that the contracture test is performed at specific centers around the world (four in the United States). Following testing, the referring physician receives a report indicating whether testing was positive, negative, or equivocal. Positive or equivocal results should be followed up with genetic testing.
Referral information can be found on the Malignant Hyperthermia Association of the United States (MHAUS) website. Genetic testing for MH is indicated in the following groups: (i) patients with a positive or equivocal contracture test, to determine the presence of a specific mutation; (ii) individuals with a positive genetic test for MH in a family member; and (iii) patients with a clinical history suspicious for MH (acute MH episode, masseter muscle rigidity, post-operative myoglobinuria, heat- or exercise-induced rhabdomyolysis) who are unable or unwilling to undergo contracture testing.

Licis et al (2011) stated that sleep-walking is a common and highly heritable sleep disorder. However, inheritance patterns of sleep-walking are poorly understood, and there have been no prior reports of genes or chromosomal localization of genes responsible for this disorder. These researchers described the inheritance pattern of sleep-walking in a 4-generation family and identified the chromosomal location of a gene responsible for sleep-walking in this family. A total of 9 affected and 13 unaffected family members of a single large family were interviewed and DNA samples collected. Parametric linkage analysis was performed. Sleep-walking was inherited as an autosomal dominant disorder with reduced penetrance in this family. Genome-wide multi-point parametric linkage analysis for sleep-walking revealed a maximum logarithm of the odds score of 3.44 at chromosome 20q12-q13.12, between 55.6 and 61.4 cM. The authors described the first genetic locus for sleep-walking at chromosome 20q12-q13.12 and concluded that sleep-walking may be transmitted as an autosomal dominant trait with reduced penetrance. In an editorial that accompanied the aforementioned study, Dogu and Pressman (2011) noted that, according to currently accepted evidence-based theories, the occurrence of sleepwalking requires genetic predisposition, priming factors such as severe sleep deprivation or stress, and, in addition, a proximal trigger factor such as noise or touch. These factors form the background for a "perfect storm," all of which must occur before a sleepwalking episode will occur. Hereditary factors likely play an important role, with recessive and multifactorial inheritance patterns having been reported. A recent genetic study has shown that the HLA-DQB1*05 Ser74 variant is a major susceptibility factor for sleepwalking in familial cases, but this finding has yet to be replicated. Another study attempted to find a causal relationship between sleepwalking and sleep-disordered breathing in co-segregated families of both disorders. However, this study was limited by the absence of molecular data. The current diagnosis of sleepwalking is based almost entirely on clinical history. There are no objective, independent means of confirming the diagnosis. Additionally, treatment of sleepwalking is symptomatic, aimed at suppressing arousal or reducing deep sleep. Identification of causative genes may eventually permit development of an independent test and treatments aimed at the underlying causes of this disorder.

RetnaGene AMD (Sequenom Center for Molecular Medicine) is a laboratory-developed genetic test to assess the risk of developing choroidal neovascularization (CNV), the wet form of age-related macular degeneration (AMD), a common eye disorder of the elderly that can lead to blindness. The test identifies at-risk Caucasians, age 60 and older. A report of the American Academy of Ophthalmology (Stone et al,
2012) recommends avoidance of routine genetic testing for genetically complex disorders like age-related macular degeneration and late-onset primary open-angle glaucoma until specific treatment or surveillance strategies have been shown, in one or more published clinical trials, to be of benefit to individuals with specific disease-associated genotypes. The report recommends that, in the meantime, genotyping of such patients should be confined to research studies. The report stated that complex disorders (e.g., age-related macular degeneration and glaucoma) tend to be more common in the population than monogenic diseases, and the presence of any one of the disease-associated variants is not highly predictive of the development of disease. The report stated that, in many cases, standard clinical diagnostic methods like biomicroscopy, ophthalmoscopy, tonography, and perimetry will be more accurate for assessing a patient's risk of vision loss from a complex disease than the assessment of a small number of genetic loci. The report said that genetic testing for complex diseases will become relevant to the routine practice of medicine as soon as clinical trials can demonstrate that patients with specific genotypes benefit from specific types of therapy or surveillance. The report concluded that, until such benefit can be demonstrated, the routine genetic testing of patients with complex eye diseases, or unaffected patients with a family history of such diseases, is not warranted.

Central core disease (CCD), also known as central core myopathy and Shy-Magee syndrome, is an inherited neuromuscular disorder characterized by central cores on muscle biopsy and clinical features of a congenital myopathy. Prevalence is unknown, but the condition is probably more common than other congenital myopathies. CCD typically presents in infancy with hypotonia and motor developmental delay and is characterized by predominantly proximal weakness pronounced in the hip girdle; orthopedic complications are common, and malignant hyperthermia susceptibility (MHS) is a frequent complication. Malignant hyperthermia (MH), or malignant hyperpyrexia, is a rare but severe pharmacogenetic disorder that occurs when patients undergoing anesthesia experience a hyperthermic reaction when exposed to certain anesthetic agents. Anesthetic agents that may trigger MH are desflurane, enflurane, halothane, isoflurane, sevoflurane, and suxamethonium chloride. MH usually occurs in the operating theater, but can occur at any time during anesthesia and up to an hour after discontinuation. CCD and MHS are allelic conditions, both due to (predominantly dominant) mutations in the skeletal muscle ryanodine receptor (RYR1) gene, encoding the principal skeletal muscle sarcoplasmic reticulum calcium release channel (RyR1). Altered excitability and/or changes in calcium homeostasis within muscle cells due to mutation-induced conformational changes of the RyR protein are considered the main pathogenetic mechanism(s). The diagnosis of CCD is based on the presence of suggestive clinical features and central cores on muscle biopsy; muscle MRI may show a characteristic pattern of selective muscle involvement and aid the diagnosis in cases with equivocal histopathological findings. Mutational analysis of the RYR1 gene may provide genetic confirmation of the diagnosis. Further evaluation of the underlying molecular mechanisms may provide the basis for future rational pharmacological treatment.
The reference standard test for establishing a clinical diagnosis of MHS is the caffeine halothane contracture test (CHCT) in the United States, and the in vitro contracture test (IVCT) in Europe and Australasia. The CHCT and IVCT are similar and measure the muscle contracture in the presence of the anesthetic halothane and caffeine. Both tests categorize patients as being MHS, MH equivocal (MHE), or MH negative (MHN). These tests are invasive and must be performed using a skeletal muscle biopsy that is less than 5 hours old. Sequence variants in the ryanodine receptor 1 (skeletal) (RYR1) gene have been shown to be associated with MH susceptibility (MHS) and are found in up to 80% of patients with confirmed MH, usually with an autosomal dominant pattern of inheritance. Although additional genetic loci have been associated with MH, the contribution of these other loci to MH is low. Genetic testing for RYR1 sequence variants from commercial providers is performed by polymerase chain reaction (PCR) followed by direct sequencing. Genetic tests for RYR1 sequence variants can be performed either to identify sequence variants in genetic hot spots of the RYR1 gene that cover the exons on which causative MH variants can be found, or to screen for sequence variants across the entire 106 exons of the RYR1 gene. Examples of commercially available tests are Malignant Hyperthermia/Central Core Disease (570-572) RYR1 Sequencing (Prevention Genetics) and Malignant Hyperthermia (RYR1 gene sequence analysis, partial) (University of Pittsburgh Medical Center, UPMC Division of Molecular Diagnostics).

Hereditary Hemorrhagic Telangiectasia

Hereditary hemorrhagic telangiectasia (HHT), also called Osler-Weber-Rendu syndrome, is an autosomal dominant disorder that results in the development of multiple abnormalities in the blood vessels. Some arterial vessels flow directly into veins rather than into the capillaries, resulting in arteriovenous malformations. When they occur in vessels near the surface of the skin, where they are visible as red markings, they are known as telangiectases (the singular is telangiectasia). Nosebleeds are very common in people with HHT, and more serious problems may arise from hemorrhages in the brain, liver, lungs, or other organs. Forms of HHT include type 1, type 2, type 3, and juvenile polyposis/hereditary hemorrhagic telangiectasia syndrome. People with type 1 tend to develop symptoms earlier than those with type 2, and are more likely to have blood vessel malformations in the lungs and brain. Type 2 and type 3 may be associated with a higher risk of liver involvement. Women are more likely than men to develop blood vessel malformations in the lungs with type 1, and are also at higher risk of liver involvement with both type 1 and type 2. Individuals with any form of hereditary hemorrhagic telangiectasia, however, can have any of these problems. Genetic testing utilizes a blood test to determine whether or not an at-risk individual carries the genes responsible for the development of disease. Mutations in two genes, endoglin and ALK-1, have been shown to be responsible for pure HHT, with the disease subtypes designated HHT1 and HHT2. Mutations in Smad4 result in a juvenile polyposis-HHT overlap syndrome. In 2010, Shah and colleagues wrote that hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant disorder with age-dependent penetrance characterized by recurrent epistaxis, mucocutaneous telangiectasias, and visceral arteriovenous malformations (AVMs).
AVMs can occur in multiple organs, including the brain, liver, and lungs, and are associated with a large portion of disease morbidity. Pulmonary AVMs (PAVMs) can be asymptomatic or manifest as dyspnea and hypoxemia secondary to shunting. The presence of untreated PAVMs can also lead to transient ischemic attacks, stroke, hemothorax, and systemic infection, including cerebral abscesses. Definitive diagnosis is made when three or more clinical findings are present, which include the features mentioned above and a first-degree relative diagnosed with HHT. Diagnosis is suspected when two findings are present. Genetic testing can help confirm the diagnosis. Mutations in three genes are known to cause disease: ENG, ACVRL1, and SMAD4. Genetic testing involves sequence and duplication/deletion analysis and identifies a mutation in roughly 80% of patients with clinical disease. The textbook Flint: Cummings Otolaryngology: Head & Neck Surgery (2010) states that genetic testing is available for prenatal diagnosis of hereditary hemorrhagic telangiectasia. This is important because catastrophic hemorrhage can occur in children with clinically silent disease; thus, screening imaging for cerebral and pulmonary arteriovenous malformations is indicated in children who have a family history. According to the textbook Feldman: Sleisenger and Fordtran's Gastrointestinal and Liver Disease (2010), genetic testing to detect mutations in the ENG, ALK-1, or MADH4 genes may be helpful in selected cases. Patients suspected of having HHT should be screened for cerebral and pulmonary arteriovenous malformations (AVMs), and family members of the patient should consider genetic testing. The textbook Cassidy: Management of Genetic Syndromes (2005) reports that, to date, mutation testing has not been widely used in the diagnosis of HHT. However, mutations in either ALK1 or endoglin have been demonstrated in over 70% of unrelated, affected individuals tested using direct gene sequencing of genomic DNA. Genetic testing for HHT will have an important role both in the testing of individuals for whom the diagnosis is uncertain and in presymptomatic testing of young adults at risk of HHT. In 2006, Bossler and colleagues described the results of mutation analysis on a consecutive series of 200 individuals undergoing clinical genetic testing for HHT. The observed sensitivity of mutation detection was similar to that in other series with strict ascertainment criteria. A total of 127 probands were found, with sequence changes consisting of 103 unique alterations, 68 of which were novel. In addition, eight intragenic rearrangements in the ENG gene and two in the ACVRL1 gene were identified in a subset of coding sequence mutation-negative individuals. Most individuals tested could be categorized by the number of HHT diagnostic criteria present. Surprisingly, almost 50% of the cases with a single symptom were found to have a significant sequence alteration; three of these reported only nosebleeds. The authors concluded, "genetic testing can confirm the clinical diagnosis in individuals and identify presymptomatic mutation carriers. As many of the complications of HHT disease can be prevented, a confirmed molecular diagnosis provides an opportunity for early detection of AVMs and management of the disease."

Spinal Muscular Atrophy

Spinal muscular atrophy (SMA) is a group of inherited diseases that cause muscle damage and weakness, which get worse over time and eventually lead to death.
The most severe form is SMA type I, also called Werdnig-Hoffman disease. Infants with SMA type II have less severe symptoms during early infancy, but become weaker with time. SMA type III is the least severe form of the disease. Rarely, SMA may begin in adulthood; this is usually a milder form of the disease. Spinal muscular atrophy, which has an estimated prevalence of 1 in 10,000, is characterized by proximal muscle weakness resulting from the degeneration of anterior horn cells in the spinal cord. SMA type I is typically diagnosed at birth or within the first 3 to 6 months of life; affected children are unable to sit unassisted and usually die from respiratory failure within 2 years. Those with SMA type II, which is diagnosed before 18 months of age, are unable to stand or walk unaided, although they may be able to sit and may survive beyond age 4. The clinical features of SMA types III and IV are milder and manifest after 18 months of age or in adulthood, respectively. SMA is inherited in an autosomal recessive manner and is caused by alterations in the survival motor neuron 1 (SMN1) gene located on chromosome 5 at band q12.2 to q13.3. Approximately 95% of SMA patients have the condition as a result of a homozygous deletion involving at least exon 7 of SMN1. Approximately 5% are compound heterozygotes, with a deletion in 1 allele of SMN1 and a subtle intragenic variation in the other. SMN2, a gene nearly identical in sequence to SMN1, is located in the same highly repetitive region on chromosome 5. Although it does not cause SMA, it has been shown to modify the phenotype of the condition: those with the milder SMA types II or III tend to have more copies of SMN2 than those with the severe type I. SMN1 deletions are detected by polymerase chain reaction (PCR) amplification of exon 7 of the SMN genes, followed by restriction fragment length polymorphism (RFLP) analysis. Following amplification, exon 7 of SMN2 will be cut with the restriction enzyme DraI, while exon 7 of SMN1 will remain intact. SMA patients with homozygous SMN1 deletions will show an absence of the uncut SMN1 exon 7 PCR products. To detect heterozygous SMN1 deletions in SMA carriers or compound heterozygotes, quantitative PCR (qPCR) is performed. To identify subtle intragenic variations in SMA patients found to have only 1 copy of the deletion, the SMN1 gene is typically sequenced. Candidates for diagnostic testing include infants, children, and adults with generalized hypotonia and proximal muscle weakness of unknown etiology. Carrier testing may be offered to couples considering pregnancy, including those with a family history of SMA, and prenatal diagnosis should be made available to all identified carriers.

Bedard et al (2011) noted that patients with heterotaxy have characteristic cardiovascular malformations, abnormal arrangement of their visceral organs, and midline patterning defects that result from abnormal left-right patterning during embryogenesis. Loss of function of the transcription factor ZIC3 causes X-linked heterotaxy and isolated congenital heart malformations and represents one of the few known monogenic causes of congenital heart disease. The birth incidence of heterotaxy-spectrum malformations is significantly higher in males, but the authors' previous work indicated that mutations within ZIC3 did not account for the male over-representation.
Therefore, cross-species comparative sequence alignment was used to identify a putative novel fourth exon, and the existence of a novel alternatively spliced transcript was confirmed by amplification from murine embryonic RNA and subsequent sequencing. This transcript, termed Zic3-B, encompasses exons 1, 2, and 4, whereas Zic3-A encompasses exons 1, 2, and 3. The resulting protein isoforms are 466 and 456 amino acid residues, respectively, sharing the first 407 residues. Importantly, the last 2 amino acids in the 5th zinc finger DNA binding domain are altered in the Zic3-B isoform, indicating a potential functional difference that was further evaluated by expression, subcellular localization, and transactivation analyses. The temporo-spatial expression pattern of Zic3-B overlaps with Zic3-A in-vivo, and both isoforms are localized to the nucleus in-vitro. Both isoforms can transcriptionally activate a Gli binding site reporter, but only ZIC3-A synergistically activates upon co-transfection with Gli3, suggesting that the isoforms are functionally distinct. The authors concluded that screening 109 familial and sporadic male heterotaxy cases did not identify pathogenic mutations in the newly identified fourth exon, and larger studies are necessary to establish the importance of the novel isoform in human disease. Tariq et al (2011) noted that heterotaxy-spectrum cardiovascular disorders are challenging for traditional genetic analyses because of clinical and genetic heterogeneity, variable expressivity, and non-penetrance. In this study, high-resolution single nucleotide polymorphism (SNP) genotyping and exon-targeted array comparative genomic hybridization (CGH) platforms were coupled to whole-exome sequencing to identify a novel disease candidate gene. SNP genotyping identified absence-of-heterozygosity regions in the heterotaxy proband on chromosomes 1, 4, 7, 13, 15, and 18, consistent with parental consanguinity. Subsequently, whole-exome sequencing of the proband identified 26,065 coding variants, including 18 non-synonymous homozygous changes not present in dbSNP132 or 1000 Genomes. Of these 18, only 4 -- 1 each in CXCL2, SHROOM3, CTSO, and RXFP1 -- mapped to the absence-of-heterozygosity regions, each of which was flanked by more than 50 homozygous SNPs, confirming recessive segregation of mutant alleles. Sanger sequencing confirmed the SHROOM3 homozygous missense mutation, and it was predicted as pathogenic by 4 bio-informatic tools. SHROOM3 has been identified as a central regulator of morphogenetic cell shape changes necessary for organogenesis and can physically bind ROCK2, a rho kinase protein required for left-right patterning. Screening 96 sporadic heterotaxy patients identified 4 additional patients with rare variants in SHROOM3. The authors concluded that, using whole-exome sequencing, they identified a recessive missense mutation in SHROOM3 associated with heterotaxy syndrome and identified rare variants in subsequent screening of a heterotaxy cohort, suggesting SHROOM3 as a novel target for the control of left-right patterning. This study revealed the value of SNP genotyping coupled with high-throughput sequencing for the identification of high-yield candidates for rare disorders with genetic and phenotypic heterogeneity.
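The filtering strategy described for the Tariq et al study (coding variants that are non-synonymous, homozygous, absent from dbSNP/1000 Genomes, and located in absence-of-heterozygosity regions) can be sketched as follows. This is my own illustration, not the authors' pipeline; the data structures, field names and coordinates are hypothetical, and real analyses operate on annotated VCF records.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Variant:
    gene: str
    chrom: str
    pos: int
    effect: str               # e.g. "missense", "synonymous", "nonsense"
    zygosity: str             # "hom" or "het"
    in_known_databases: bool  # previously reported in dbSNP132 or 1000 Genomes

def in_aoh(v: Variant, aoh_regions: List[Tuple[str, int, int]]) -> bool:
    """True if the variant lies inside any absence-of-heterozygosity (AOH) region."""
    return any(v.chrom == c and start <= v.pos <= end for c, start, end in aoh_regions)

def candidate_recessive_variants(variants: List[Variant],
                                 aoh_regions: List[Tuple[str, int, int]]) -> List[Variant]:
    """Keep homozygous, non-synonymous, novel variants that map to AOH regions."""
    return [v for v in variants
            if v.zygosity == "hom"
            and v.effect != "synonymous"
            and not v.in_known_databases
            and in_aoh(v, aoh_regions)]

# Toy example: only the SHROOM3-like call survives all four filters.
aoh = [("chr4", 70_000_000, 80_000_000)]
calls = [Variant("SHROOM3", "chr4", 77_400_000, "missense", "hom", False),
         Variant("GENE_X", "chr4", 77_500_000, "missense", "het", False),
         Variant("GENE_Y", "chr7", 12_345_678, "missense", "hom", False)]
print([v.gene for v in candidate_recessive_variants(calls, aoh)])  # ['SHROOM3']
```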
Also, UpToDate reviews on "Clinical manifestations, pathophysiology, and diagnosis of atrioventricular (AV) canal defects" (Fleishman and Tugertimur, 2013) and "Congenital heart disease (CHD) in the newborn: Presentation and screening for critical CHD" (Altman, 2013) do not mention the use of genetic testing as a management tool.

Genetic testing may be used to analyze DNA to detect gene mutations to assist in diagnosing a genetic disorder in individuals who exhibit disease signs and symptoms. It may also be used to determine whether an asymptomatic individual may be at risk for developing a genetic disorder, since an individual's risk might be higher if genes are inherited that cause or increase susceptibility to a disorder. Genetic testing for disease risk, also referred to as predictive, presymptomatic or predispositional genetic testing, may be offered to asymptomatic individuals with a family history of the genetic disorder or if a disease-causing, or pathogenic, mutation has been identified in an affected relative. Testing may be offered for conditions such as Duchenne muscular dystrophy (DMD), Becker muscular dystrophy (BMD), myotonic dystrophy or spinal muscular atrophy (SMA). Becker muscular dystrophy (BMD) is an inherited disorder that involves slowly progressing muscle weakness of the legs and pelvis. BMD is similar to DMD, though it is less common and progresses at a slower rate. Duchenne muscular dystrophy (DMD) is the most severe form of muscular dystrophy; it usually affects young boys and causes progressive muscle weakness, usually beginning in the legs. Facioscapulohumeral muscular dystrophy (FSHD) is a disorder characterized by muscle weakness and atrophy. The areas of the body most often affected include the muscles of the face, shoulder blades and upper arms. Myotonic dystrophy is an inherited neuromuscular disorder characterized by progressive muscle weakness and wasting. Spinocerebellar ataxia (SCA) is an inherited progressive neurodegenerative disease. It is characterized by dysfunction of the cerebellum, the part of the brain that controls walking and balance, and is manifested by progressive uncoordinated movements (ataxia). There are at least 25 different types of SCA. Diagnostic genetic testing may be used for individuals with signs and symptoms of SCA. Genetic testing has also been proposed for at-risk individuals with a family history of SCA.

Mitochondrial Recessive Ataxia Syndrome

Lee et al (2007) stated that spino-cerebellar ataxia (SCA) is a heterogeneous group of neurodegenerative disorders with common features of adult-onset cerebellar ataxia. Many patients with clinically suspected SCA are subsequently diagnosed with common SCA gene mutations. Previous reports suggested some common mitochondrial DNA (mtDNA) point mutations and mitochondrial DNA polymerase gene (POLG1) mutations might be additional underlying genetic causes of cerebellar ataxia. These researchers tested whether the mtDNA point mutations A3243G, A8344G, T8993G, and T8993C, or the POLG1 mutations W748S and A467T, are found in patients with adult-onset ataxia who did not have common SCA mutations. A total of 476 unrelated patients with suspected SCA underwent genetic testing for SCA 1, 2, 3, 6, 7, 8, 10, 12, 17, and DRPLA gene mutations. After excluding these SCA mutations and patients with a paternal transmission history, 265 patients were tested for the mtDNA mutations A3243G, A8344G, T8993G, and T8993C, and the POLG1 W748S and A467T mutations.
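For context (this derivation is an illustration and is not part of the cited study), the upper confidence bound reported in the next paragraph for a mutation observed in none of the n tested patients follows from the standard exact binomial calculation, often approximated by the "rule of three":

```latex
% One-sided 95% upper bound p_u on prevalence when 0 of n patients carry the mutation:
(1 - p_u)^{n} = 0.05 \;\Rightarrow\; p_u = 1 - 0.05^{1/n} \approx \tfrac{3}{n},
\qquad n = 265 \Rightarrow p_u = 1 - 0.05^{1/265} \approx 0.011 \;(\text{about } 1.1\%).
```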
No mtDNA A3243G, A8344G, T8993G, or T8993C mutation, and no POLG1 W748S or A467T mutation, was detected in any of the 265 ataxia patients, suggesting that the upper limit of the 95% confidence interval (CI) for the prevalence of these mitochondrial mutations in Chinese patients with adult-onset non-SCA ataxia is no higher than 1.1%. The authors concluded that the mtDNA mutations A3243G, A8344G, T8993G, and T8993C, and the POLG1 mutations W748S and A467T, are very rare causes of adult-onset ataxia in Taiwan. Routine screening for these mutations in ataxia patients of Chinese origin is of limited clinical value. Gramstad et al (2009) noted that mutations in the catalytic subunit of polymerase gamma (POLG1) produce a wide variety of neurological disorders, including a progressive ataxic syndrome with epilepsy: mitochondrial SCA and epilepsy (MSCAE). The authors' earlier studies of patients with this syndrome raised the possibility of more prominent right than left hemisphere dysfunction. To investigate this in more detail, 8 patients (6 women, 2 men; mean age of 22.3 years) were studied. All completed an intelligence test (Wechsler Adult Intelligence Scale, WAIS), and 4 were also given memory tests and a comprehensive neuropsychological test battery. Patients with MSCAE showed significant cognitive dysfunction. Mean Verbal IQ (84.3) was significantly better than Performance IQ (71.8) (t = 5.23, p = 0.001), but memory testing and neuropsychological testing failed to detect a consistent unilateral dysfunction. The authors concluded that further studies are needed to define the profile and development of cognitive symptoms in this disorder. Isohanni et al (2011) stated that mitochondrial DNA polymerase gamma (POLG1) mutations in children often manifest as Alpers syndrome, whereas in adults, a common manifestation is mitochondrial recessive ataxia syndrome (MIRAS) with severe epilepsy. Because some patients with MIRAS have presented with ataxia or epilepsy already in childhood, these investigators searched for POLG1 mutations in children with a range of neurologic manifestations. They investigated POLG1 in 136 children, all clinically suspected to have mitochondrial disease, with one or more of the following: ataxia, axonal neuropathy, severe epilepsy without a known epilepsy syndrome, epileptic encephalopathy, encephalohepatopathy, or neuropathologically verified Alpers syndrome. A total of 7 patients had POLG1 mutations, and all of them had severe encephalopathy with intractable epilepsy. Four patients had died after exposure to sodium valproate. Brain MRI showed parieto-occipital or thalamic hyper-intense lesions, white matter abnormality, and atrophy. Muscle histology and mitochondrial biochemistry results were normal in all. The authors concluded that POLG1 analysis should belong to the first-line DNA diagnostic tests for children with an encephalitis-like presentation evolving into epileptic encephalopathy with liver involvement (Alpers syndrome), even if brain MRI and morphology, respiratory chain activities, and the amount of mitochondrial DNA in the skeletal muscle are normal. POLG1 analysis should precede valproate therapy in pediatric patients with a typical phenotype. However, POLG1 is not a common cause of isolated epilepsy or ataxia in childhood. Tang et al (2012) determined the prevalence of a MNGIE-like phenotype in patients with recessive POLG1 mutations.
Tang et al (2012) determined the prevalence of a MNGIE-like phenotype in patients with recessive POLG1 mutations. Mutations in the POLG1 gene, which encodes the catalytic subunit of the mitochondrial DNA polymerase gamma essential for mitochondrial DNA replication, cause a wide spectrum of mitochondrial disorders. Common phenotypes associated with POLG1 mutations include Alpers syndrome, ataxia-neuropathy syndrome, and progressive external ophthalmoplegia (PEO). Mitochondrial neuro-gastro-intestinal encephalomyopathy (MNGIE) is an autosomal recessive disorder characterized by severe gastrointestinal dysmotility, cachexia, PEO and/or ptosis, peripheral neuropathy, and leukoencephalopathy. MNGIE is caused by TYMP mutations. Rare cases of MNGIE-like phenotype have been linked to RRM2B mutations. Recently, POLG1 mutations were identified in a family with clinical features of MNGIE but no leukoencephalopathy. The coding regions and exon-intron boundaries of POLG1 were sequence analyzed in patients suspected of POLG1-related disorders. Clinical features of 92 unrelated patients with 2 pathogenic POLG1 alleles were carefully reviewed. Three patients, accounting for 3.3% of all patients with 2 pathogenic POLG1 mutations, were found to have clinical features consistent with MNGIE but no leukoencephalopathy. Patient 1 carries p.W748S and p.R953C; patient 2 is homozygous for p.W748S; and patient 3 is homozygous for p.A467T. In addition, patient 2 has a similarly affected sibling with the same POLG1 genotype. POLG1 mutations may cause a MNGIE-like syndrome, but the lack of leukoencephalopathy and the normal plasma thymidine favor POLG1 mutations as the responsible molecular defect. Furthermore, UpToDate reviews on "Overview of the hereditary ataxias" (Opal and Zoghbi, 2013a) and "The spinocerebellar ataxias" (Opal and Zoghbi, 2013b) do not mention the use of POLG1 genetic testing.

The National Comprehensive Cancer Network's clinical practice guideline on "Myelodysplastic syndromes" (2014) stated that further evaluations are necessary to establish the role of these genetic lesions in risk stratification systems in myelodysplastic syndrome. The guideline stated that mutations in TET2 are among the most common mutations reported in patients with myelodysplastic syndromes (about 20% of cases). Mutations in SF3B1 are one of several common molecular abnormalities involving the RNA splicing machinery, occurring in 14.5% to 16.0% of MDS cases.

Whole Genome/Exome Sequencing and Genome-Wide Association Studies

Whole genome sequencing (WGS) is a laboratory test utilized to determine the arrangement (sequence) of an individual's entire genome at a single time. WGS allows the identification of mutations in the genome without having to target a gene or chromosome region based upon an individual's personal or family history. WGS may also be referred to as full genome sequencing, complete genome sequencing or entire genome sequencing. Exome sequencing, also referred to as whole exome sequencing or WES, is an alternative to WGS. It is a laboratory test used to determine the sequence of the protein-coding regions of the genome. The exome is the part of the genome that encodes protein, and roughly 85% of the variants known to contribute to disease in humans are located there. Exome sequencing has been proposed as a diagnostic method to identify these genetic variants in patients not diagnosed by traditional diagnostic and genetic testing approaches.
Genome-wide association studies (GWAS), also referred to as genome-wide analysis, are a method to identify genes involved in human disease by comparing the genome of individuals who have a disease or condition to the genome of individuals without the disease or condition. GWAS are performed using microarrays to search the genome for small variations, called single nucleotide polymorphisms (SNPs, pronounced "snips"), that occur more often in individuals with a specific disorder than in those who do not have the disorder.

A special report on "Exome sequencing for clinical diagnosis of patients with suspected genetic disorders" by the BCBSA's Technology Evaluation Center (2013) stated that "Exome sequencing has the capacity to determine in a single assay an individual's exomic variation profile, limited to most of the protein coding sequence of an individual (approximately 85%), composed of about 20,000 genes, 180,000 exons (protein-coding segments of a gene), and constituting approximately 1% of the whole genome. It is believed that the exome contains about 85% of heritable disease-causing mutations ... Exome sequencing, relying on next-generation sequencing technologies, is not without challenges and limitations ... Detailed guidance from regulatory or professional organizations is under development, and the variability contributed by the different platforms and procedures used by clinical laboratories offering exome sequencing as a clinical service is unknown ... Currently, the diagnostic yield for single-gene disorders appears to be no greater than 50% and possibly less, depending on the patient population and provider expertise. Medical management options may be available for only a subset of those diagnosed".

Strasser et al (2012) stated that Alport syndrome (ATS) is an inherited type-IV collagen disorder, caused by mutations in COL4A3 and COL4A4 (autosomal recessive) or COL4A5 (X-linked). Clinical symptoms include progressive renal disease, eye abnormalities and high-tone sensorineural deafness. A renal histology very similar to ATS is observed in a subset of patients affected by mutations in MYH9, encoding non-muscle myosin type IIA, a cytoskeletal contractile protein. MYH9-associated disorders (May-Hegglin anomaly, Epstein and Fechtner syndromes, and others) are inherited in an autosomal dominant manner and characterized by defects in different organs (including eyes, ears, kidneys and thrombocytes). These researchers described a 6-year-old girl with hematuria, proteinuria, and early sensorineural hearing loss. The father of the patient is affected by ATS, the mother by isolated inner ear deafness. Genetic testing revealed a pathogenic mutation in COL4A5 (c.2605G>A) in the girl and her father and a heterozygous mutation in MYH9 (c.4952T>G) in the girl and her mother. The paternal COL4A5 mutation seems to account for the complete phenotype of ATS in the father, and the maternal mutation in MYH9 for the inner ear deafness in the mother. It was discussed that the interaction of both mutations could be responsible for both the unexpected severity of ATS symptoms and the very early onset of inner ear deafness in the girl.

An UpToDate review on "Congenital and acquired disorders of platelet function" (Coutre, 2013) states that "Giant platelet disorders -- Inherited platelet disorders with giant platelets are quite rare (picture 2 and algorithm 1 and table 4). These include platelet glycoprotein abnormalities (e.g.
Bernard-Soulier syndrome), deficiency of platelet alpha granules (e.g. gray platelet syndrome), the May-Hegglin anomaly, which also involves the presence of abnormal neutrophil inclusions (i.e. Dohle-like bodies), and some kindreds with type 2B von Willebrand disease (the Montreal platelet syndrome)". This review does not mention the use of genetic testing as a management tool for giant platelet disorders.

UpToDate reviews on "Inborn errors of metabolism: Epidemiology, pathogenesis, and clinical features" (Sutton, 2013a) and "Inborn errors of metabolism: Classification" (Sutton, 2013b) do not mention the use of genetic testing as a management tool.

Very Long Chain Acyl-CoA Dehydrogenase Deficiency (VLCADD)

An UpToDate review on "Newborn screening" (Sielski, 2013) states that "MS-MS tandem mass spectrometry detects more cases of inborn errors of metabolism than clinical diagnosis. In a study from New South Wales and the Australian Capital Territory, Australia, the prevalence of 31 inborn errors of metabolism affecting the urea cycle, amino acids (excluding PKU), organic acids, and fatty acid oxidation detected by MS-MS in 1998 to 2002 was 15.7 per 100,000 births, compared to 8.6 to 9.5 per 100,000 births in the four four-year cohorts preceding expanded screening. The increased rate of diagnosis was most apparent for the medium-chain and short-chain acyl-CoA dehydrogenase deficiencies. Whether all children with disorders detected by MS-MS would have become symptomatic is uncertain ... The American Academy of Pediatrics has developed newborn screening fact sheets for 12 disorders: biotinidase deficiency, congenital adrenal hyperplasia, congenital hearing loss, congenital hypothyroidism, cystic fibrosis, galactosemia, homocystinuria, maple syrup urine disease, medium-chain acyl-coenzyme A dehydrogenase deficiency, PKU, sickle cell disease and other hemoglobinopathies, and tyrosinemia ... With the use of tandem mass spectrometry (MS-MS), the prevalence of a confirmed metabolic disorder detected by newborn screening is 1:4,000 live births (about 12,500 diagnoses each year) in the United States. The most commonly diagnosed conditions are hearing loss, primary congenital hypothyroidism, cystic fibrosis, sickle cell disease, and medium-chain acyl-CoA dehydrogenase deficiency". This review does not mention very long chain acyl-CoA dehydrogenase deficiency.

Congenital Stationary Night Blindness

According to Orphanet (a portal for rare diseases and orphan drugs), congenital stationary night blindness (CSNB) is an inherited retinal disorder that predominantly affects the rods. It is a rare disease, and 3 types of transmission can be found: (i) autosomal dominant, (ii) autosomal recessive, and (iii) X-linked recessive. The affection is heterogeneous. The only symptom is hemeralopia with a moderate loss of visual acuity. Both the funduscopy and the visual field are normal. In recessive forms, the "b" wave on the electroretinogram (ERG) is not found in the scotopic study, while the "a" wave is normal and increases with light intensity. In dominant forms, the "b" wave is seen. Levels of rhodopsin are normal and regenerate normally. Signal transmission may be the affected function. There is no specific treatment for CSNB (orpha.net/consor/cgi-bin/OC_Exp.php?lng=EN&Expert=215).

According to Genetics Home Reference, X-linked CSNB is a disorder of the retina. People with this condition typically have difficulty seeing in low light (night blindness).
They also have other vision problems, including reduced acuity, high myopia, nystagmus, and strabismus. Color vision is typically not affected by this disorder. The visual problems associated with this condition are congenital. They tend to remain stable (stationary) over time. Researchers have identified 2 major types of X-linked CSNB: (i) the complete form, and (ii) the incomplete form. The types have very similar signs and symptoms. However, everyone with the complete form has night blindness, while not all people with the incomplete form have night blindness. The types are distinguished by their genetic cause and by the results of ERG (ghr.nlm.nih.gov/condition/x-linked-congenital-stationary-night-blindness). In general, the diagnosis of X-linked CSNB can be made by ophthalmologic examination (including ERG) and a family history consistent with X-linked inheritance (Boycott et al, 2012; ncbi.nlm.nih.gov/books/NBK1245).

According to a Medscape review on "The Genetics of Hereditary Retinopathies and Optic Neuropathies" (Iannaccone, 2005), CSNB can be inherited according to all Mendelian inheritance patterns; 2 X-linked and 2 autosomal dominant genes have been cloned. In all types of CSNB, night vision is congenitally but non-progressively impaired and the retinal examination is normal. Most CSNB patients also have congenital nystagmus as the presenting sign, which can create a differential diagnostic challenge with Leber congenital amaurosis. Typically, patients with complete X-linked CSNB are also moderate-to-high myopes. The X-linked CSNB forms, which are the most common ones, all share an electronegative electroretinogram response similar to that seen in X-linked retinoschisis, and are distinguished into CSNB type 1 (also known as complete CSNB) and CSNB type 2 (incomplete CSNB) based on additional electroretinogram features, a distinction that has been confirmed at the genetic level (medscape.com/viewarticle/5017616).

Price et al (1988) reported that 7 of 8 patients presented initially with, or were followed for, decreased acuity and nystagmus without complaints of night blindness. The diagnosis of CSNB was established with ERG and dark adaptation testing. They stated that careful electrodiagnostic testing is needed to provide accurate genetic counseling. Two patients showed pupillary constriction to darkness, which is a sign of retinal disease in young patients.

Lorenz et al (1996) presented the clinical data of 2 families with previously undiagnosed X-linked incomplete CSNB; ERG recordings in both families were suggestive of CSNB. The ERG of the obligate carrier was normal. In an attempt to distinguish between the complete and the incomplete type, and to identify further carrier signs, scotopic perimetry and dark adaptation were performed in both affected males and carriers. Scotopic perimetry tested the rod-mediated visual pathway in its spatial distribution. In affected males with non-recordable ERGs, scotopic perimetry and dark adaptation disclosed residual rod function indicating an incomplete type. In carriers, there was a sensitivity loss at 600 nm, which may be a new carrier sign. The authors concluded that correct diagnosis of the different forms of CSNB together with the identification of carriers is important for (i) genetic counseling, and (ii) linkage studies to identify the gene(s) for CSNB.

Kim et al (2012) evaluated the frequency of negative waveform ERGs in a tertiary referral center.
All patients who had an ERG performed at the electrophysiology clinic at Emory University from January 1999 through March 2008 were included in the study. Patients with a b-wave amplitude less than or equal to the a-wave amplitude during the dark-adapted bright flash recording, in at least 1 eye, were identified as having a negative ERG. Clinical information, such as age, gender, symptoms, best corrected visual acuity, and diagnoses, was recorded for these patients when available. A total of 1,837 patients underwent ERG testing during the study period. Of those, 73 patients had a negative ERG, for a frequency of 4.0%. Within the adult (greater than or equal to 18 years of age) and pediatric populations, the frequencies of a negative ERG were 2.5% and 7.2%, respectively. Among the 73 cases, negative ERGs were more common among male than female patients, 6.7% versus 1.8% (p < 0.0001). Negative ERGs were most common among male children and least common among female adults, 9.6% versus 1.1%, respectively (p < 0.0001). Overall in this group of patients, the most common diagnoses associated with a negative ERG were CSNB (n = 29) and X-linked retinoschisis (XLRS, n = 7). The authors concluded that the overall frequency of negative ERGs in this large retrospective review was 4.0%. Negative ERGs were most common among male children and least common among female adults. Despite the growing number of new diagnoses associated with negative ERGs, CSNB and XLRS appear to be the most likely diagnoses for a pediatric patient who presents with a negative ERG.

It is also interesting to note that in a recently completed clinical trial (last verified June 2012) of "Treatment of Congenital Stationary Night Blindness with an Alga Containing High Dose of Beta Carotene" (clinicaltrials.gov/ct2/show/NCT00569023), the selection criteria for participants do not include genetic testing. They included the following: isolated rod response markedly reduced (less than 20% of normal) after 20 minutes of dark adaptation and improved by 50% after 2 hours; negative maximal response ("a" wave to "b" wave ratio less than 2); and retinal mid-peripheral white dots (more than 3,000 dots).

Kumar et al (2009) noted that many independent prognostic markers have been identified for predicting survival and helping in the management of lung cancer cases. p53 protein over-expression and mutation have been the topic of numerous such publications. However, little is known about the role of anti-p53 antibodies as a prognostic marker in lung cancer. These investigators searched the MEDLINE database and the bibliographies of the retrieved manuscripts and reviews. The retrieved studies were grouped according to the cohort studied. Out of 179 citations retrieved, 17 met selection criteria. A total of 7 studies used only non-small-cell lung cancer (NSCLC), 4 studies used only small-cell lung cancer, and 6 studies used a mixed cohort of both types of lung cancer. The studies varied in concept, design, cohort studied and methodology. The prognostic role of anti-p53 antibodies in lung cancer remained contradictory: some studies showed an association with poor prognosis, others showed a favorable association, and still others showed no association whatsoever. The frequency of detection of anti-p53 antibody was very low and highly specific, with results being independent of the cohort studied.
The authors concluded that adequate clinical trials, with optimized cohorts, antigen and assay validation, are needed to address patients' and physicians' concerns regarding these associations.

Ciancio et al (2011) stated that over-expression of the tumor suppressor gene p53 and of the marker for cellular proliferation Ki67 in open lung biopsies has been reported as a predictive factor for survival of patients with lung cancer. However, the prognostic value of p53 and Ki67 in fiberoptic bronchial biopsies (FBB) has not been fully investigated. These researchers evaluated p53 and Ki67 immunostaining in FBB from 19 patients with NSCLC (12 adenocarcinomas, 5 squamous cell carcinomas and 2 NSCLC-NOS). Fiberoptic bronchial biopsy specimens were fixed in formalin, embedded in paraffin, and immunostained using anti-p53 and anti-Ki67 antibodies. Slides were reviewed by 2 independent observers and classified as positive (+ve) when the number of cells with stained nuclei exceeded 15% for p53 or when greater than 25% positive cells were observed throughout each section for Ki67. Positive (+ve) immunostaining was found in 9 patients for p53 (47.37%) and 8 patients for Ki67 (42.10%). When these investigators examined the overall survival (OS) curves of the patients with Mantel's log-rank test, both p53 -ve and Ki67 -ve patients had significantly higher survival rates than p53 +ve (p < 0.005) and Ki67 +ve (p < 0.0001) patients, respectively. The authors concluded that the findings of this study suggested that negative immunostaining of fiberoptic bronchial biopsies for p53 and Ki67 could represent a better prognostic factor for patients with NSCLC.

Mattioni et al (2013) noted that TP53 gene mutations can lead to the expression of a dysfunctional protein that in turn may enable genetically unstable cells to survive and change into malignant cells. Mutant p53 accumulates early in cells and can precociously induce circulating anti-p53 antibodies (p53Abs); in fact, p53 over-expression has been observed in pre-neoplastic lesions, such as bronchial dysplasia, and p53Abs have been found in patients with chronic obstructive pulmonary disease before the diagnosis of lung and other tobacco-related tumors. These researchers performed a large prospective study, enrolling non-smokers, ex-smokers and smokers with or without impairment of lung function, to analyze the incidence of serum p53Abs and the correlation with clinicopathologic features, in particular smoking habits and impairment of lung function, in order to investigate their possible role as early markers of the onset of lung cancer or other cancers. The p53Ab levels were evaluated by a specific ELISA in 675 subjects. Data showed that significant levels of serum p53Abs were present in 35 subjects (5.2%); no difference was observed in the presence of p53Abs with regard to age and gender, while p53Abs correlated with the number of cigarettes smoked per day and pack-years. Furthermore, serum p53Abs were associated with the worst lung function impairment. The median p53Ab level in positive subjects was 3.5 units/ml (range of 1.2 to 65.3 units/ml). Only 15 positive subjects participated in the follow-up, again testing positive for serum p53Abs, and no evidence of cancer was found in these patients. The authors concluded that the presence of serum p53Abs was found to be associated with smoking level and lung function impairment, both risk factors for cancer development.
However, in this study these researchers did not observe the occurrence of lung cancer or other cancers in the follow-up of positive subjects; therefore, they could not directly correlate the presence of serum p53Abs with cancer risk.

Lei et al (2013) stated that the diagnosis of lung cancer remains a clinical challenge. Many studies have assessed the diagnostic potential of anti-p53 antibody in lung cancer patients, but with controversial results. These researchers summarized the overall diagnostic performance of anti-p53 antibody in lung cancer. Based on a comprehensive search of PubMed and Embase, these investigators identified outcome data from all articles estimating the diagnostic accuracy of anti-p53 antibody for lung cancer. Summary estimates for sensitivity, specificity, and other diagnostic indexes were pooled using a bivariate model. The overall measure of accuracy was calculated using a summary receiver operating characteristic curve, and the area under the curve (AUC) was calculated. According to the inclusion criteria, a total of 16 studies with 4,414 subjects (2,249 lung cancers, 2,165 controls) were included. The summary estimates were: sensitivity 0.20 (95% confidence interval [CI]: 0.15 to 0.27), specificity 0.97 (95% CI: 0.95 to 0.98), positive likelihood ratio 6.64 (95% CI: 4.34 to 10.17), negative likelihood ratio 0.83 (95% CI: 0.77 to 0.89), and diagnostic odds ratio 8.04 (95% CI: 5.05 to 12.79); the AUC was 0.84. Subgroup analysis suggested that anti-p53 antibody had a better diagnostic performance for small cell lung cancer than for non-small cell lung cancer. The authors concluded that anti-p53 antibody can be an assistant marker in diagnosing lung cancer, but the low sensitivity limits its use as a screening tool for lung cancer. Moreover, they stated that further studies should be performed to confirm these findings.

UpToDate reviews on "Overview of the initial evaluation, treatment and prognosis of lung cancer" (Midthun, 2014a) and "Overview of the risk factors, pathology, and clinical manifestations of lung cancer" (Midthun, 2014b) do not mention anti-p53 and anti-MAPKAPK3 as biomarkers. Also, an UpToDate review on "Screening for lung cancer" (Deffebach and Humphrey, 2014) does not mention anti-MAPKAPK3 as a biomarker. Moreover, it lists "Immunostaining or molecular analysis of sputum for tumor markers. As examples, p16 ink4a promoter hypermethylation and p53 mutations have been shown to occur in chronic smokers before there is clinical evidence of neoplasia" as one of the technologies under investigation. Furthermore, the National Comprehensive Cancer Network's clinical practice guideline on "Non-small cell lung cancer" (Version 4.2014) does not mention anti-p53 and anti-MAPKAPK3 as biomarkers.
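As a rough consistency check on the summary estimates reported by Lei et al (2013) above, the likelihood ratios and diagnostic odds ratio can be recomputed directly from the pooled sensitivity and specificity using their standard definitions. The sketch below uses only the published point estimates; it does not reproduce the bivariate pooling itself, and the variable names are illustrative.

    # Point estimates from Lei et al (2013); pooled confidence intervals are not recomputed here.
    sensitivity = 0.20
    specificity = 0.97

    lr_positive = sensitivity / (1 - specificity)        # ~6.7 (published: 6.64)
    lr_negative = (1 - sensitivity) / specificity        # ~0.82 (published: 0.83)
    diagnostic_odds_ratio = lr_positive / lr_negative    # ~8.1 (published: 8.04)

    print(f"LR+ = {lr_positive:.2f}")
    print(f"LR- = {lr_negative:.2f}")
    print(f"DOR = {diagnostic_odds_ratio:.2f}")

The small differences from the published values reflect rounding and the bivariate model used in the meta-analysis; the arithmetic nonetheless shows why a test with 20% sensitivity and 97% specificity can "rule in" disease reasonably well yet be of little value as a screening tool.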
SLCO1B1 testing has been proposed to predict the risk of statin-induced myopathy. Talameh and Kitzmiller (2014) noted that statins are the most commonly prescribed drugs in the United States and are extremely effective in reducing major cardiovascular events in the millions of Americans with hyperlipidemia. However, many patients (up to 25%) cannot tolerate or discontinue statin therapy due to statin-induced myopathy (SIM). Patients will continue to experience SIM at unacceptably high rates or experience unnecessary cardiovascular events (as a result of discontinuing or decreasing their statin therapy) until strategies for predicting or mitigating SIM are identified. A promising strategy for predicting or mitigating SIM is pharmacogenetic testing, particularly of pharmacokinetic genetic variants, as SIM is related to statin exposure. Data are emerging on the association between pharmacokinetic genetic variants and SIM. A current, critical evaluation of the literature on pharmacokinetic genetic variants and SIM for potential translation to clinical practice is lacking. This review focused specifically on pharmacokinetic genetic variants and their association with SIM clinical outcomes. These investigators also discussed future directions, specific to the research on pharmacokinetic genetic variants, that could speed translation into clinical practice. For simvastatin, these researchers did not find sufficient evidence to support the clinical translation of pharmacokinetic genetic variants other than SLCO1B1. However, SLCO1B1 may also be clinically relevant for pravastatin- and pitavastatin-induced myopathy, but additional studies assessing SIM clinical outcomes are needed. CYP2D6*4 may be clinically relevant for atorvastatin-induced myopathy, but mechanistic studies are needed. The authors concluded that future research efforts need to incorporate statin-specific analyses, multi-variant analyses, and a standard definition of SIM. As the use of statins is extremely common and SIM continues to occur in a significant number of patients, future research investments in pharmacokinetic genetic variants have the potential to make a profound impact on public health.

Kuhlenbaumer and colleagues (2014) provided a comprehensive meta-analysis and review of the clinical and molecular genetics of essential tremor (ET). Studies were reviewed from the literature. Linkage studies were analyzed applying criteria used for monogenic disorders. For association studies, allele counts were extracted and allelic association calculated whenever possible. A meta-analysis was performed for genetic markers investigated in more than 3 studies. Linkage studies have shown conclusive results in a single family only for the locus ETM2 (essential tremor monogenic locus 2; logarithm of odds [lod] score greater than 3.3). None of the 3 ETM loci has been confirmed independently with a lod score greater than 2.0 in a single family. A mutation in the FUS gene (fused in sarcoma) was found in one ET family by exome sequencing. Two genome-wide association studies demonstrated association between variants in the LINGO1 gene (leucine-rich repeat and Ig domain containing 1) and the SLC1A2 gene (solute carrier family 1, member 2) and ET, respectively. This meta-analysis confirmed the association of rs9652490 in LINGO1 with ET. Candidate gene mutation analysis and association studies have not identified reproducible associations. The authors concluded that problems of genetic studies of ET are caused by the lack of stringent diagnostic criteria, small sample sizes, lack of biomarkers, a high phenocopy rate, evidence for non-Mendelian inheritance, and high locus heterogeneity in presumably monogenic ET. They stated that these issues could be resolved by better worldwide cooperation and the use of novel genetic techniques.

Taylor et al (2014) noted that mitochondrial disorders have emerged as a common cause of inherited disease, but their diagnosis remains challenging.
Multiple respiratory chain complex defects are particularly difficult to diagnose at the molecular level because of the massive number of nuclear genes potentially involved in intra-mitochondrial protein synthesis, with many not yet linked to human disease. These researchers determined the molecular basis of multiple respiratory chain complex deficiencies. They studied 53 patients referred to 2 national centers in the United Kingdom and Germany between 2005 and 2012. All had biochemical evidence of multiple respiratory chain complex defects but no primary pathogenic mitochondrial DNA mutation. Whole-exome sequencing was performed using 62-Mb exome enrichment, followed by variant prioritization using bioinformatic prediction tools, variant validation by Sanger sequencing, and segregation of the variant with the disease phenotype in the family. Presumptive causal variants were identified in 28 patients (53%; 95% CI: 39% to 67%) and possible causal variants were identified in 4 (8%; 95% CI: 2% to 18%). Together these accounted for 32 patients (60%; 95% CI: 46% to 74%) and involved 18 different genes. These included recurrent mutations in RMND1, AARS2, and MTO1, each on a haplotype background consistent with a shared founder allele, and potential novel mutations in 4 possible mitochondrial disease genes (VARS2, GARS, FLAD1, and PTCD1). Distinguishing clinical features included deafness and renal involvement associated with RMND1 and cardiomyopathy with AARS2 and MTO1. However, atypical clinical features were present in some patients, including normal liver function and Leigh syndrome (subacute necrotizing encephalomyelopathy) seen in association with TRMU mutations and no cardiomyopathy with founder SCO2 mutations. It was not possible to confidently identify the underlying genetic basis in 21 patients (40%; 95% CI: 26% to 54%). The authors concluded that exome sequencing enhanced the ability to identify potential nuclear gene mutations in patients with biochemically defined defects affecting multiple mitochondrial respiratory chain complexes. Moreover, they stated that additional study is needed in independent patient populations to determine the utility of this approach in comparison with traditional diagnostic methods.

UpToDate reviews on "Diagnostic evaluation of women with suspected breast cancer" (Esserman and Joe, 2014a), "Clinical features, diagnosis, and staging of newly diagnosed breast cancer" (Esserman and Joe, 2014b), and "Clinical manifestations and diagnosis of a palpable breast mass" (Sabel, 2014) do not mention RAD51C gene testing. Furthermore, NCCN's clinical practice guidelines on "Breast cancer" (Version 3.2014) and "Ovarian cancer including fallopian tube cancer and primary peritoneal cancer" (Version 3.2014) do not mention RAD51C gene testing.

Yang et al (2013) stated that osteoporosis is characterized by low bone mineral density (BMD), a highly heritable trait that is determined, in part, by the actions and interactions of multiple genes. Although an increasing number of genes have been identified as having independent effects on BMD, few studies have been performed to identify genes that interact with one another to affect BMD.

Kim et al (2013) noted that BMD loci have been reported in Caucasian genome-wide association studies (GWAS). These researchers investigated the association between 59 known BMD loci (and 200 suggestive SNPs) and DXA-derived BMD in an East Asian population with respect to sex and site specificity. They also identified 4 novel BMD candidate loci from the suggestive SNPs.
A total of 2,729 unrelated Korean individuals from a population-based cohort were analyzed. The authors selected 747 single-nucleotide polymorphisms (SNPs). These markers included 547 SNPs from 59 loci with genome-wide significance (GWS, p value less than 5 × 10(-8)) and 200 suggestive SNPs that showed weaker BMD association with p value less than 5 × 10(-5). After quality control, 535 GWS SNPs and 182 suggestive SNPs were included in the replication analysis. Of the 535 GWS SNPs, 276 from 25 loci were replicated (p < 0.05) in the Korean population, a 51.6% replication rate. Of the 182 suggestive variants, 16 were replicated (p < 0.05, an 8.8% replication rate), and 5 reached a significant combined p value (less than 7.0 × 10(-5), i.e., 0.05/717 SNPs, corrected for multiple testing). Two markers (rs11711157, rs3732477) represent the same signal near the gene CPN2 (carboxypeptidase N, polypeptide 2). The other variants, rs6436440 and rs2291296, were located in the genes AP1S3 (adaptor-related protein complex 1, sigma 3 subunit) and RARB (retinoic acid receptor, beta). The authors concluded that these results illustrate ethnic differences in BMD susceptibility genes and underscore the need for further genetic studies in each ethnic group. The authors were also able to replicate some SNPs with suggestive associations. These SNPs may be BMD-related genetic markers and should be further investigated.

The Institute for Clinical Systems Improvement's clinical guideline on "Diagnosis and treatment of osteoporosis" (Florence et al, 2013) did not mention the use of genetic testing. Furthermore, an UpToDate review on "Pathogenesis of osteoporosis" (Manolagas, 2014) states that "Genetics -- A portion of the variation in BMD among humans has a genetic basis. Genome-wide association studies have so far identified approximately 80 genetic loci that influence BMD. A remarkable number of these loci are involved in some aspect of Wnt/beta-catenin signaling, the receptor activator of nuclear factor kappa-B (RANK)/RANK ligand (RANKL)/osteoprotegerin (OPG) axis, or in mesenchymal cell differentiation. The contribution of individual genetic variants, however, is small, and of the total variance in BMD only a small percentage is explained by variants of genes identified. To date, there are no genome-wide association studies on fracture or BMD loss. Therefore, it remains unclear whether the same genes that determine BMD also affect the rate of bone loss with advancing age or the risk of fractures".

Thoracic Aortic Aneurysms and Dissections

A number of conditions are associated with aortic dysfunction and dilation, including Marfan syndrome, Loeys-Dietz syndrome, Ehlers-Danlos syndrome type IV, Turner syndrome, and arterial tortuosity syndrome. Ehlers-Danlos syndrome (EDS) is a group of inherited disorders affecting the connective tissues. Common characteristics of EDS include easy bruising, skin hyperelasticity or laxity, joint hypermobility and tissue weakness. EDS is categorized by type: classic (EDS types I and II), hypermobility (EDS type III), vascular (EDS type IV), kyphoscoliosis (EDS type VI), arthrochalasia (EDS types VIIA and B), and dermatosparaxis (EDS type VIIC). The classic, hypermobility and vascular types occur more frequently than the other types. Other more rare forms include spondylocheirodysplasia EDS and musculocontractural EDS; additional rare variants of EDS have also been described.
Acrogeria refers to looseness and wrinkling of the skin of the hands and feet that is caused by loss of subcutaneous fat and collagen and gives the appearance of premature aging. Dermatosparaxis is an inherited defect in collagen synthesis caused by a deficiency of procollagen peptidase; it results in fragility, hyperelasticity and laxity of the skin. Musculocontractural EDS is a rare form of EDS with the following characteristics: distinctive craniofacial dysmorphism, congenital contractures of the thumbs and fingers, clubfeet, severe kyphoscoliosis, muscular hypotonia, hyperextensible thin skin with easy bruisability and atrophic scarring, wrinkled palms, joint hypermobility and ocular involvement. Spondylocheirodysplasia is a rare form of EDS with the following clinical features: postnatal growth retardation, moderate short stature, protuberant eyes with bluish sclerae, hands with finely wrinkled palms, atrophy of the thenar muscles and tapering fingers.

Ehlers-Danlos syndrome type IV (EDS type IV) is characterized by thin, translucent skin; easy bruising; characteristic facial appearance; and arterial, intestinal, and/or uterine fragility (Pepin and Byers, 2011). The diagnosis of EDS type IV is based on clinical findings and confirmed by identification of a causative mutation in COL3A1. EDS type IV is inherited in an autosomal dominant manner.

Arterial tortuosity syndrome (ATS) is characterized by severe and widespread arterial tortuosity of the aorta and middle-sized arteries (with an increased risk of aneurysms and dissections) and by focal and widespread stenosis, which can involve the aorta and/or pulmonary arteries (Callewaert et al, 2014). The diagnosis of ATS is established in a proband with generalized arterial tortuosity and biallelic (homozygous or compound heterozygous) pathogenic variants in SLC2A10. ATS is inherited in an autosomal recessive manner.

Loeys-Dietz syndrome (LDS) is an inherited connective tissue disorder characterized by aortic aneurysms and other blood vessel abnormalities. Mutations in either the TGFBR1 or TGFBR2 gene can cause LDS. LDS is characterized by vascular findings (cerebral, thoracic, and abdominal arterial aneurysms and/or dissections) and skeletal manifestations (pectus excavatum or pectus carinatum, scoliosis, joint laxity, arachnodactyly, talipes equinovarus) (Loeys and Dietz, 2014). The diagnosis of LDS is based on characteristic clinical findings in the proband and family members and on molecular genetic testing of TGFBR1, TGFBR2, SMAD3, and TGFB2. LDS is inherited in an autosomal dominant manner.

Marfan syndrome is a genetic disorder in which the body's connective tissue is abnormal. The disorder affects many parts of the body, primarily the connective tissue of the heart, blood vessels, eyes, bones, lungs and covering of the spinal cord. Marfan syndrome diagnosis relies on a set of strict major and minor criteria known as the Ghent nosology, a scoring system developed to aid in the clinical diagnosis of Marfan syndrome. Two fundamental features of the Ghent nosology are aortic root aneurysm and ectopia lentis. In the absence of a family history of Marfan syndrome, the presence of aortic root aneurysm and ectopia lentis is sufficient to diagnose Marfan syndrome. Without these two features, or a combination of systemic features described in the Ghent nosology, genetic testing may be required to confirm a diagnosis. Even with the availability of genetic testing, establishing a diagnosis of Marfan syndrome depends heavily upon significant clinical findings.
Marfan syndrome is a systemic disorder of connective tissue with a high degree of clinical variability (Dietz, 2014). Cardinal manifestations involve the ocular, skeletal, and cardiovascular systems. Cardiovascular manifestations include dilatation of the aorta at the level of the sinuses of Valsalva, a predisposition for aortic tear and rupture, mitral valve prolapse with or without regurgitation, tricuspid valve prolapse, and enlargement of the proximal pulmonary artery. Marfan syndrome is a clinical diagnosis based on family history and the observation of characteristic findings in multiple organ systems. Marfan syndrome is caused by mutation of FBN1. The sensitivity of molecular genetic testing of FBN1 is substantial yet incomplete for unknown reasons; this may be explained by an atypical location or character of FBN1 pathogenic variants in some individuals (e.g., large deletions or promoter mutations) or by locus heterogeneity. Marfan syndrome is inherited in an autosomal dominant manner. A variety of conditions including, but not limited to, Loeys-Dietz syndrome (LDS) and familial thoracic aortic aneurysm and dissection (TAAD) have clinical manifestations that overlap those of Marfan syndrome and should be distinguished from Marfan syndrome with documentation of discriminating features and biochemical and/or genetic testing, when indicated. Familial TAAD is an inherited disorder that causes the aorta to weaken and stretch. Mutations in any of several genes are associated with familial TAAD.

Predictive genetic testing for LDS, Marfan syndrome and TAAD may be sought for at-risk asymptomatic or presymptomatic family members to detect mutations in the genes known to cause these disorders, in order to determine if the individual will develop the condition. Several labs offer multigene panels, often using next-generation sequencing (NGS), for familial TAAD, LDS and Marfan syndrome that include not only the FBN1 gene but also a number of other genes associated with disorders featuring aortic aneurysms and dissections. With the introduction of NGS, laboratories can simultaneously analyze numerous genes reportedly associated with Marfan syndrome and related conditions. Examples include the Marfan Syndrome and Aortic Aneurysms (MarfanAA) Test, the Marfan Syndrome/Thoracic Aortic Aneurysm and Dissection (TAAD) and Related Disorders Test, and TAADNext.

Guidelines from the American College of Cardiology (Hiratzka et al, 2010) state that, if a mutant gene (FBN1, TGFBR1, TGFBR2, COL3A1, ACTA2, MYH11) associated with aortic aneurysm and/or dissection is identified in a patient, first-degree relatives should undergo counseling and testing. Then, only the relatives with the genetic mutation should undergo aortic imaging. Clinical laboratories may offer a multi-gene Marfan syndrome/Loeys-Dietz syndrome/familial thoracic aortic aneurysms and dissections panel that includes FBN1 as well as a number of other genes associated with disorders that include aortic aneurysms and dissections (Dietz, 2014). These panels vary by methods used and genes included; thus, the ability of a panel to detect a pathogenic variant or pathogenic variants in any given individual also varies. In most circumstances a comprehensive clinical evaluation and imaging studies will point to a specific diagnosis (or subset of diagnoses) that has the highest probability, and thus should be pursued first for molecular confirmation.
In the absence of such hypothesis-driven testing, there is an increased risk of erroneous interpretation of variants of uncertain significance when multi-gene panels are applied, especially if the physician requesting testing is not familiar with the specific diagnoses and/or genes under consideration.

Mitochondrial Genome Sequencing

Mitochondria are tiny organelles housed in nearly every cell in the body and are responsible for creating cellular energy. Mitochondrial disorders are chronic, genetic conditions that can be inherited and occur when mitochondria fail to produce sufficient energy for the body to function. Mitochondrial disorders can be caused by mutations in nuclear DNA or in DNA contained in the mitochondria (mitochondrial DNA, mtDNA). Mitochondrial diseases are a clinically heterogeneous group of disorders that arise as a result of dysfunction of the mitochondrial respiratory chain (Chinnery, 2014). They can be caused by mutation of genes encoded by either nuclear DNA or mitochondrial DNA (mtDNA). Mitochondrial DNA variants are transmitted by maternal inheritance (mitochondrial inheritance). Nuclear gene variants may be inherited in an autosomal recessive, autosomal dominant, or X-linked manner. Mitochondrial disorders can occur at any age. Symptoms can involve one or more organs with varying degrees of severity. Some individuals display features that fall into distinct syndromes such as chronic progressive external ophthalmoplegia (CPEO), Kearns-Sayre syndrome (KSS), Leber hereditary optic neuropathy (LHON), Leigh syndrome (LS), mitochondrial encephalomyopathy with lactic acidosis and stroke-like episodes (MELAS), myoclonic epilepsy with ragged-red fibers (MERRF) and neurogenic weakness with ataxia and retinitis pigmentosa (NARP). Individuals may also present with an overlapping range of disease, such as mitochondrial recessive ataxia syndrome (MIRAS). Often affected individuals do not fit into a specific category. Common symptoms include ataxia, cardiomyopathy, diabetes mellitus, exercise intolerance, external ophthalmoplegia, fluctuating encephalopathy, myopathy, optic atrophy, pigmentary retinopathy, ptosis, seizures, sensorineural deafness and spasticity. Treatment of mitochondrial disorders is largely supportive.

Molecular genetic testing may be carried out on genomic DNA extracted from blood (suspected nuclear DNA mutations and some mtDNA mutations) or on genomic DNA extracted from muscle (suspected mtDNA mutations) (Chinnery, 2014). Studies for mtDNA mutations are usually carried out on skeletal muscle DNA because a pathogenic mtDNA variant may not be detected in DNA extracted from blood. Genetic testing panels have been proposed to aid in the diagnosis of individuals with suspected mitochondrial disorders and may involve point mutation analysis. Approaches to molecular genetic testing of a proband to consider are serial testing of single genes, deletion/duplication analysis, multi-gene panel testing (simultaneous testing of multiple genes), and genomic testing (e.g., sequencing the entire mitochondrial genome, whole-exome sequencing, or whole-genome sequencing to identify mutation of a nuclear gene) (Chinnery, 2014) (e.g., Combined Mito Genome Plus Mito Nuclear Gene Panel, Comprehensive Mitochondrial Nuclear Gene Panel, MitoMet Plus aCGH Analysis).
In contrast to genomic testing, serial testing of single genes and multi-gene panel testing rely on the clinician developing a hypothesis about which specific gene or set of genes to test (Chinnery, 2014). Hypotheses may be based on (1) mode of inheritance, (2) distinguishing clinical features, and/or (3) other discriminating features. The potential role of genomic testing is where single-gene testing (and/or use of a multi-gene panel) has not confirmed a diagnosis in an individual with features of a mitochondrial disorder. Such testing includes whole-exome sequencing, whole-genome sequencing, and whole mitochondrial sequencing. False negative rates vary by genomic region; therefore, genomic testing may not be as accurate as targeted single-gene testing or multi-gene molecular genetic testing panels (Chinnery, 2014). Most laboratories confirm positive results using a second, well-established method. Certain DNA variants may not be detectable through genomic testing, such as large deletions or duplications (greater than 8 to 10 bp in length), triplet repeat expansions, and epigenetic alterations.

Exome sequencing has shown promise in defining the genetic basis of mitochondrial disorders caused by mutation of nuclear genes. To determine the molecular basis of multiple respiratory chain complex deficiencies, Taylor et al (2014) studied 53 patients referred to 2 national centers in the United Kingdom and Germany between 2005 and 2012. All subjects had evidence of a histochemical and/or biochemical diagnosis of mitochondrial disease in a clinically affected tissue (skeletal muscle, liver, or heart), confirming decreased activities of multiple respiratory chain complexes based on published criteria. Subjects had no large-scale mtDNA rearrangements, no mtDNA depletion (decreased levels of mtDNA confirmed in muscle), and no mtDNA point mutations. In those with congenital structural abnormalities, major nuclear gene rearrangements were excluded by comparative genomic hybridization arrays. Whole-exome sequencing was performed using 62-Mb exome enrichment, followed by variant prioritization using bioinformatics prediction tools, variant validation by Sanger sequencing, and segregation of the variant with the disease phenotype in the family. Presumptive causal variants were identified in 28 patients (53%; 95% CI, 39% to 67%) and possible causal variants were identified in 4 (8%; 95% CI, 2% to 18%). Together these accounted for 32 patients (60%; 95% CI, 46% to 74%) and involved 18 different genes. These included recurrent mutations in RMND1, AARS2, and MTO1, each on a haplotype background consistent with a shared founder allele, and potential novel mutations in 4 possible mitochondrial disease genes (VARS2, GARS, FLAD1, and PTCD1). Distinguishing clinical features included deafness and renal involvement associated with RMND1 and cardiomyopathy with AARS2 and MTO1. However, atypical clinical features were present in some patients, including normal liver function and Leigh syndrome (subacute necrotizing encephalomyelopathy) seen in association with TRMU mutations and no cardiomyopathy with founder SCO2 mutations. It was not possible to confidently identify the underlying genetic basis in 21 patients (40%; 95% CI, 26% to 54%). The authors stated that additional study is required in independent patient populations to determine the utility of this approach in comparison with traditional diagnostic methods.
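The proportions and confidence intervals quoted for the Taylor et al (2014) cohort can be reproduced, at least approximately, from the raw counts with a standard binomial interval. The sketch below uses the exact (Clopper-Pearson) interval via scipy as an assumed tool choice; the interval method actually used in the paper may differ slightly, and the function name is illustrative.

    from scipy.stats import beta

    def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
        """Exact two-sided binomial confidence interval."""
        lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
        upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
        return lower, upper

    n = 53  # patients studied by Taylor et al (2014)
    for label, k in [("presumptive causal variant", 28),
                     ("possible causal variant", 4),
                     ("no genetic diagnosis", 21)]:
        lo, hi = clopper_pearson(k, n)
        print(f"{label}: {k}/{n} = {k/n:.0%} (95% CI {lo:.0%} to {hi:.0%})")

Running this yields roughly 53% (39% to 67%), 8% (2% to 18%), and 40% (26% to 54%), matching the published figures and making clear how wide the uncertainty remains in a 53-patient cohort.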
Genetic Test Panels for Nonsyndromic Hereditary Hearing Loss

There is limited published evidence for the clinical validity and clinical utility of specific genetic test panels for nonsyndromic hearing loss. A number of test panels are currently available commercially (e.g., OtoSCOPE, OtoGenome, OtoSeq). The genes included in these test panels differ significantly, and there is limited published information on their clinical utility and clinical validity.

Hearing loss may be classified as either syndromic or nonsyndromic. Nonsyndromic hearing loss is defined by the absence of malformations of the external ear or other medical problems in the affected individual. With syndromic hearing loss, malformations of the external ear and/or other medical problems are present. Approximately 50% of nonsyndromic hearing loss can be attributed to a genetic cause and may be inherited in an autosomal recessive (70% of patients) or autosomal dominant (20% of patients) manner, with mitochondrial, X-linked and other genetic causes making up the remainder of patients. Sequence variants in approximately 60 genes and some micro-RNAs have been associated with nonsyndromic hearing loss. Micro-RNAs are post-transcriptional regulators that consist of 20 to 25 nucleotides.

Usher and Pendred syndromes are the most common of the approximately 400 forms of syndromic hearing loss. Both have autosomal recessive inheritance. Usher syndrome is characterized by sensorineural hearing loss and later development of retinitis pigmentosa. Usher syndrome has three forms that vary by the profundity of hearing loss and whether vestibular dysfunction is present. The three types of Usher syndrome have been associated with sequence variants in 9 different genes. Pendred syndrome is characterized by congenital hearing loss and euthyroid goiter that develops in the second or third decade of life. Pendred syndrome is associated with sequence variants in the SLC26A4 gene. Some of the genes associated with Usher and Pendred syndromes may also be associated with nonsyndromic hearing loss.

The OtoSCOPE test has been developed to make use of next-generation sequencing capabilities to simultaneously test for sequence variants in 66 genes associated with nonsyndromic hearing loss as well as with both Usher and Pendred syndromes. The claimed advantage of the OtoSCOPE test is that simultaneous analysis of the 66 genes included in the test may reduce the time and cost compared with genetic testing of individual genes. OtoSCOPE genetic testing for hereditary hearing loss is considered investigational/experimental because there is inadequate evidence in the peer-reviewed published clinical literature regarding its effectiveness. The OtoGenome Test is a next-generation sequencing (NGS) assay that covers all 73 known genes for non-syndromic hearing loss. There is insufficient evidence of the performance and clinical utility of this test panel. Published evidence for the OtoSeq test panel includes an epidemiological study of the use of a component of the OtoSeq panel in identifying certain hearing loss genes in 34 Pakistani families (Shahzad et al, 2013). In addition, there is a preliminary study of the performance of OtoSeq in 8 individuals with hearing loss, comparing the results of next-generation sequencing with Sanger sequencing (Sivakumaran et al, 2013). There is insufficient published information about the performance and clinical utility of this test panel.
X-Linked Intellectual Disability Panels

Intellectual disability (ID, formerly called mental retardation) is a developmental brain disorder commonly defined by an IQ below 70 and limitations in both intellectual functioning and adaptive behavior (Piton et al, 2013). ID can originate from environmental causes or genetic anomalies, and its incidence in children is estimated to be 1% to 2%. ID is more common in males than females in the population (the male-to-female ratio is 1.3-1.4 to 1), assumed to be due to mutations on the X chromosome. Impaired mental functioning occurs as an isolated feature or as part of many X-linked syndromes (McKusick et al, 2010). ID that is not associated with other distinguishing features is referred to as nonspecific or 'nonsyndromic.' X-linked intellectual disability (XLID) is a genetically heterogeneous disorder with more than 100 genes known to date (Tzschach et al, 2015). Fragile X syndrome remains the most common XLMR (X-linked mental retardation) condition discovered so far (Raymond, 2006). FMR1 is the target of the unstable expansion mutation responsible for fragile X syndrome and accounts for about 1% to 2% of all ID cases. Half of the known genes carrying mutations responsible for XLID are associated with syndromic forms (i.e., ID associated with defined clinical or metabolic manifestations), which facilitates the identification of causative mutations in the same gene because unrelated probands with comparable phenotypes can be more easily matched. The other half of known genes carrying mutations responsible for XLID appear to be associated with nonsyndromic or paucisyndromic forms. Next-generation sequencing panels have been developed to identify mutations associated with XLID. However, little has been published on their analytic validity, clinical validity and clinical utility.

Tzschach et al (2015) performed targeted enrichment and next-generation sequencing of 107 XLID genes in a cohort of 150 male patients. One hundred patients had sporadic intellectual disability, and 50 patients had a family history suggestive of XLID. The investigators also analyzed a sporadic female patient with severe ID and epilepsy because she had strongly skewed X-inactivation. Target enrichment and highly parallel sequencing allowed a diagnostic coverage of greater than 10 reads for approximately 96% of all coding bases of the XLID genes, at a mean coverage of 124 reads. The investigators reported finding 18 pathogenic variants in 13 XLID genes (AP1S2, ATRX, CUL4B, DLG3, IQSEC2, KDM5C, MED12, OPHN1, SLC9A6, SMC1A, UBE2A, UPF3B and ZDHHC9) among the 150 male patients. Thirteen pathogenic variants were present in the group of 50 familial patients (26%), and 5 pathogenic variants among the 100 sporadic patients (5%). Systematic gene dosage analysis for low-coverage exons detected one pathogenic hemizygous deletion. An IQSEC2 nonsense variant was detected in the female ID patient, providing further evidence for a role of this gene in encephalopathy in females. The investigators noted that skewed X-inactivation was more frequently observed in mothers with pathogenic variants compared with those without known X-linked defects. The investigators concluded that the mutation rate in the cohort of sporadic patients corroborates previous estimates of 5% to 10% for X-chromosomal defects in male ID patients.
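The gap in diagnostic yield between the familial and sporadic groups in the Tzschach et al (2015) cohort (13 of 50 versus 5 of 100) can be examined with a simple two-by-two comparison. The sketch below uses Fisher's exact test via scipy purely as an illustration; the authors did not necessarily report this particular analysis.

    from scipy.stats import fisher_exact

    # 2x2 table: rows = familial vs sporadic patients,
    # columns = pathogenic variant found vs not found (Tzschach et al, 2015)
    table = [[13, 37],   # 13 of 50 familial patients (26%)
             [5, 95]]    # 5 of 100 sporadic patients (5%)

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio ~ {odds_ratio:.1f}, two-sided p = {p_value:.3g}")

Under these assumptions the yield difference is large (odds ratio around 6 to 7) and unlikely to be due to chance, which is consistent with the intuition that a family history suggestive of X-linked inheritance substantially raises the prior probability of finding a causal X-chromosomal variant.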
Piton et al (2013) used data from a large-scale sequencing project to question the implication in XLID of several of the genes proposed to be involved in it. The authors stated that mutations causing monogenic XLID have now been reported in over 100 genes, most of which are commonly included in XLID diagnostic gene panels. Nonetheless, the boundary between true mutations and rare non-disease-causing variants often remains elusive. The authors stated that sequencing of a large number of control X chromosomes, required for avoiding false-positive results, was not systematically possible in the past. Such information is now available thanks to large-scale sequencing projects such as the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project, which provides variation information on 10,563 X chromosomes from the general population. The authors used this NHLBI cohort to systematically reassess the implication of 106 genes proposed to be involved in monogenic forms of XLID. Based on this reassessment, the authors particularly questioned the implication in XLID of ten of them (AGTR2, MAGT1, ZNF674, SRPX2, ATP6AP2, ARHGEF6, NXF5, ZCCHC12, ZNF41, and ZNF81), in which truncating variants or previously published mutations are observed at a relatively high frequency within this cohort. The authors also highlighted 15 other genes (CCDC22, CLIC2, CNKSR2, FRMPD4, HCFC1, IGBP1, KIAA2022, KLF8, MAOA, NAA10, NLGN3, RPL10, SHROOM4, ZDHHC15, and ZNF261) for which replication studies are warranted. The authors proposed that a similar reassessment of reported mutations (and genes) with the use of data from large-scale human exome sequencing would be relevant for a wide range of other genetic diseases.

Angelman and Prader-Willi syndromes

Angelman syndrome (AS) is a neurogenetic disorder characterized by developmental delay, lack of speech, seizures, and walking and balance disorders. Prader-Willi syndrome (PWS) is a genetic disorder characterized by short stature, intellectual delay, weak muscle tone (hypotonia), hypogonadism and an uncontrolled appetite that leads to life-threatening obesity. The diagnosis of AS or PWS can be established through a variety of biochemical and genetic tests, including DNA methylation analysis, deletion/duplication analysis, fluorescence in situ hybridization (FISH), chromosomal microarray (CMA), uniparental disomy (UPD) and imprinting defect (ID) studies. DNA methylation is a biochemical process in which a strand of DNA is modified after it is replicated. Deletion/duplication analysis is laboratory testing that identifies an absence of a segment of DNA (deletion) and/or the presence of an extra segment of DNA (duplication) in a coding region. Fluorescence in situ hybridization (FISH) is a laboratory technique used to detect small deletions or rearrangements in chromosomes. FISH may be used in the diagnosis and prognosis of cancer, and it is also utilized for the evaluation of microdeletion syndromes, such as Angelman syndrome, Prader-Willi syndrome and velocardiofacial syndrome. Uniparental disomy (UPD) refers to the situation in which both members of a chromosome pair, or segments of a chromosome pair, are inherited from one parent and neither is inherited from the other parent. UPD can result in an abnormal phenotype in some cases.

Kaler (2010) stated that for children with low serum copper (0 to 55 microg/dL) and low serum ceruloplasmin (10 to 160 mg/L) concentrations, direct sequence analysis of the ATP7A coding region and flanking intron sequences detects about 80% of mutations, and deletion/duplication analysis can be used to detect deletion of an ATP7A exon, multiple exons, or the whole gene in about 15% of affected patients.
Celiac disease is an immune disorder in which an individual is unable to tolerate gluten, a protein found in wheat, rye and barley, and sometimes in products such as vitamin supplements and some medications. The diagnosis of celiac disease is based on biopsy and histopathologic examination of the small intestine. Blood tests may be used to select individuals for biopsy and to aid in diagnosis. Genetic testing, which may also be referred to as human leukocyte antigen (HLA) typing for celiac disease, may be ordered if results from these tests are inconclusive.

The National Comprehensive Cancer Network's clinical practice guideline on "Gastric cancer" (Version 3.2015) states that genetic testing for CDH1 mutations should be considered when any of the following criteria is met: 2 or more gastric cancer cases in a family, 1 confirmed diffuse gastric cancer (DGC) diagnosed before age 50 years; or 3 or more confirmed cases of DGC in 1st- or 2nd-degree relatives, independent of age; or DGC diagnosed before age 40 years without a family history; or personal or family history of DGC and lobular breast cancer, 1 diagnosed before age 50 years. (These criteria are encoded as a simple, illustrative decision rule in the sketch further below.)

Left Ventricle Non-Compaction

An UpToDate review on "Isolated left ventricular noncompaction" (Connolly and Attenhofer-Jost, 2015) states that "Although the 2009 HFSA guideline suggests genetic testing for the one most clearly affected person in a family to facilitate family screening and management, we do not routinely recommend genetic studies in patients and families with LVNC".

Polycystic Liver Disease

In a review on "Diagnosis and management of polycystic liver disease", Gevers and Drenth (2013) stated that "mutation analysis for PCLD (PRKCSH and SEC63) is rarely performed in routine clinical practice, as it is not needed for clinical decision-making for these patients". Cnossen et al (2015) stated that isolated autosomal dominant polycystic liver disease (ADPLD) is a Mendelian disorder. Heterozygous mutations in PRKCSH (protein kinase C substrate 80K-H; 80 kDa protein, heavy chain; MIM 177060) are the most frequent cause. Routine molecular testing using Sanger sequencing identifies pathogenic variants in the PRKCSH (15%) and SEC63 (Saccharomyces cerevisiae homolog 63; MIM 608648) (6%) genes, but approximately 80% of patients meeting the clinical ADPLD criteria carry no PRKCSH or SEC63 mutation. Cyst tissue often shows somatic deletions with loss of heterozygosity, which was recently recognized as a general mechanism in ADPLD. These researchers hypothesized that germline deletions in the PRKCSH gene may be responsible for hepatic cystogenesis in a significant number of mutation-negative ADPLD patients. In this study, these investigators designed a multiplex ligation-dependent probe amplification (MLPA) assay to screen for deletions of PRKCSH exons. Genomic DNA from 60 patients with an ADPLD phenotype was included. MLPA analysis detected no exon deletions in mutation-negative ADPLD patients. The authors concluded that large copy number variations at the germline level were not present in patients with a clinical diagnosis of ADPLD. They stated that MLPA analysis of the PRKCSH gene should not be considered as a diagnostic method to explain hepatic cystogenesis. Furthermore, an UpToDate review on "Diagnosis and management of cystic lesions of the liver" (Regev and Reddy, 2015) does not mention genetic testing as a management tool.
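As flagged above, the quoted NCCN CDH1 criteria can be read as a disjunction of simple conditions. The sketch below encodes them that way for illustration only; the field names are assumptions chosen for readability, not NCCN terminology, and the guideline text above remains authoritative.

```python
# Illustrative encoding of the NCCN (v3.2015) CDH1 testing criteria quoted above.
# Field names are assumptions chosen for readability, not NCCN terminology.

from dataclasses import dataclass

@dataclass
class GastricCancerHistory:
    gastric_cancer_cases_in_family: int        # total gastric cancer cases in the family
    confirmed_dgc_before_50: bool              # confirmed diffuse gastric cancer diagnosed < 50 y
    confirmed_dgc_1st_2nd_degree: int          # confirmed DGC cases in 1st-/2nd-degree relatives
    dgc_before_40_without_family_history: bool
    dgc_and_lobular_breast_ca_one_before_50: bool

def consider_cdh1_testing(h: GastricCancerHistory) -> bool:
    """True if any of the quoted criteria is met."""
    return (
        (h.gastric_cancer_cases_in_family >= 2 and h.confirmed_dgc_before_50)
        or h.confirmed_dgc_1st_2nd_degree >= 3
        or h.dgc_before_40_without_family_history
        or h.dgc_and_lobular_breast_ca_one_before_50
    )

# Example: two gastric cancer cases, one confirmed DGC diagnosed before age 50.
print(consider_cdh1_testing(GastricCancerHistory(2, True, 0, False, False)))  # True
```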
MTHFR Genetic Testing for Risk Assessment of Hereditary Thrombophilia

Variations in the MTHFR gene have been studied as risk factors for numerous conditions, including cardiovascular disease, thrombophilia, stroke, hypertension and pregnancy-related complications; however, its role remains unclear. Coriu and colleagues (2014) noted that pregnancy is a normal physiological state that predisposes to thrombosis, determined by hormonal changes in the body. These changes occur in the blood flow (venous stasis), in the vascular wall (hypotonia, endothelial lesion) and in the coagulation factors (increased levels of factor VII, factor VIII, factor X and von Willebrand factor), together with decreased activity of natural anti-coagulants (protein C, protein S). In this retrospective study, these researchers examined a possible association between thrombosis and inherited thrombophilia in pregnant women. A total of 151 pregnant women with a history of complicated pregnancy (maternal thrombosis and placental vascular pathology, including intra-uterine growth restriction, pre-eclampsia and recurrent pregnancy loss) who were admitted to the authors' hospital between January 2010 and July 2014 were included in this study. These investigators performed genetic analyses to detect the factor V Leiden mutation, the G20210A mutation in the prothrombin gene, and the C677T and A1298C mutations in the methylenetetrahydrofolate reductase (MTHFR) gene. The risk of thrombosis in patients with factor V Leiden was 2.66 times higher than in patients negative for this mutation (OR 2.66; 95% CI: 0.96 to 7.37; p = 0.059). The authors did not find any statistical association with mutations in the MTHFR gene. Pregnant women with a family history of thrombosis presented a 2.18-fold higher risk of thrombosis (OR 2.18; CI: 0.9 to 5.26; p = 0.085). Of the 151 pregnant women, thrombotic events occurred in 24 patients: deep vein thrombosis, pulmonary embolism, cerebral venous sinus thrombosis and ischemic stroke. Thrombotic events were identified in the last trimester of pregnancy, but especially post-partum.

An UpToDate review on "Screening for inherited thrombophilia in asymptomatic individuals" (Bauer, 2015) states that "Homocysteine and the MTHFR variant -- There is no clinical rationale for measurement of fasting plasma homocysteine levels or for assaying for presence of the MTHFR 677C>T variant in asymptomatic individuals, and we never order this testing to evaluate venous or arterial thrombosis". An UpToDate review on "Screening for inherited thrombophilia in children" (Raffini, 2015) states that "The strength of the association between each IT and the development of VTE varies, and they are often classified into 'weak' or 'strong' risk factors. Although there are numerous other inherited defects that have been described, none have gained widespread acceptance. Testing for polymorphisms in the methylene tetrahydrofolate reductase (MTHFR) gene is not indicated, because these extremely common polymorphisms do not frequently cause hyperhomocysteinemia, and they are not, by themselves, associated with VTE". Furthermore, the American College of Medical Genetics and Genomics (2015) noted the following: "Don't order MTHFR genetic testing for the risk assessment of hereditary thrombophilia. The common MTHFR gene variants, 677C>T and 1298A>C, are prevalent in the general population.
Recent meta-analyses have disproven an association between the presence of these variants and venous thromboembolism" (choosingwisely.org/clinician-lists/american-college-medical-genetics-genomics-mthfr-genetic-testing-for-hereditary-thrombophilia).

Next-Generation Sequencing for the Diagnosis of Learning Disabilities in Children

Beale and colleagues (2015) stated that learning disability (LD) is a serious and lifelong condition characterized by the impairment of cognitive and adaptive skills. Some cases of LD with unidentified causes may be linked to genetic factors. Next-generation sequencing (NGS) techniques are new approaches to genetic testing that are expected to increase diagnostic yield. These investigators described current pathways that involve the use of genetic testing; collected stakeholder views on the changes in service provision that would need to be put in place before NGS could be used in clinical practice; described the new systems and safeguards that would need to be put in place before NGS could be used in clinical practice; and explored the cost-effectiveness of using NGS compared with conventional genetic testing. A research advisory group was established. This group provided ongoing support by e-mail and telephone throughout the lifetime of the study and also contributed face-to-face through a workshop. A detailed review of published studies and reports was undertaken. In addition, information was collected through 33 semi-structured interviews with key stakeholders. Next-generation sequencing techniques consist of targeted gene sequencing, whole-exome sequencing (WES) and whole-genome sequencing (WGS). Targeted gene panels, which are the least complex, are in their infancy in clinical settings. Some interviewees thought that, during the next 3 to 5 years, targeted gene panels would be superseded by WES. If NGS technologies were to be fully introduced into clinical practice in the future, a number of factors would need to be overcome. The main resource-related issues pertaining to service provision are the need for additional computing capacity, more bioinformaticians, more genetic counsellors, and genetics-related training for the public and a wide range of staff. It is also considered that, as the number of children undergoing genetic testing increases, there will be an increase in demand for information and support for families. The main issues relating to systems and safeguards are giving informed consent, sharing unanticipated findings, developing ethical and other frameworks, equity of access, data protection, data storage and data sharing. There is little published evidence on the cost-effectiveness of NGS technologies. The major barriers to determining cost-effectiveness are the uncertainty around diagnostic yield, the heterogeneity of diagnostic pathways, and the lack of information on the impact of a diagnosis on health care, social care, educational support needs and the wider family. Furthermore, as NGS techniques are currently being used only in research, costs and benefits to the National Health Service (NHS) are unclear. The authors concluded that NGS technologies are at an early stage of development and it is too soon to say whether they can offer value for money to the NHS as part of the LD diagnostic process.
They stated that substantial organizational changes, as well as new systems and safeguards, would be needed if NGS technologies were to be introduced into NHS clinical practice, and that considerable further research is needed to establish whether using NGS technologies to diagnose learning disabilities is clinically effective and cost-effective.

The Genetic Addiction Risk Score (GARSPREDX™)

Blum and associates (2015) noted that the Brain Reward Cascade (BRC) is an interaction of neurotransmitters and their respective genes that controls the amount of dopamine released within the brain. Any variations within this pathway, whether genetic or environmental (epigenetic), may result in addictive behaviors or reward deficiency syndrome (RDS), a term coined to describe addictive behaviors and their genetic components. These investigators searched a number of important databases, including filtered resources (Cochrane Systematic Reviews, DARE, PubMed Central Clinical Queries, National Guideline Clearinghouse) and unfiltered resources (PsycINFO, ACP PIER, PsychSage, PubMed/Medline). The major search terms included: dopamine agonist therapy for addiction; dopamine agonist therapy for reward dependence; dopamine antagonistic therapy for addiction; dopamine antagonistic therapy for reward dependence; and neurogenetics of RDS. While there are many studies claiming a genetic association with RDS behavior, not all are scientifically accurate. The authors concluded that, albeit with their acknowledged bias, this Clinical Pearl discussed the facts and fictions behind molecular genetic testing in RDS and the significance of the development of the Genetic Addiction Risk Score (GARSPREDX™), described as the first test to accurately predict one's genetic risk for RDS. The clinical value of the Genetic Addiction Risk Score (GARSPREDX™) has yet to be determined.

Simms et al (2011) wrote that nephronophthisis (NPHP) is an autosomal recessive cystic kidney disease and a leading genetic cause of established renal failure in children and young adults. Early presenting symptoms in children with NPHP include polyuria, nocturia, or secondary enuresis, pointing to a urinary concentrating defect. The authors further noted that NPHP is associated with extra-renal manifestations in 10% to 15% of patients. The most frequent extra-renal association is retinal degeneration, leading to blindness. Increasingly, molecular genetic testing is being utilised to diagnose NPHP and avoid the need for a renal biopsy.

Barisic et al (2015) stated that Meckel-Gruber syndrome is a rare autosomal recessive lethal ciliopathy characterized by the triad of cystic renal dysplasia, occipital encephalocele and postaxial polydactyly. The authors conducted the largest population-based epidemiological study to date, using data provided by the European Surveillance of Congenital Anomalies network for a study population of 191 cases identified between January 1990 and December 2011 in 34 European registries. The mean prevalence was 2.6 per 100,000 births in a subset of registries. The investigators found that there were 145 (75.9%) terminations of pregnancy after prenatal diagnosis, 13 (6.8%) fetal deaths, and 33 (17.3%) live births. In addition to cystic kidneys (97.7%), encephalocele (83.8%) and polydactyly (87.3%), frequent features included other central nervous system anomalies (51.4%), fibrotic/cystic changes of the liver (65.5% of cases with post-mortem examination) and orofacial clefts (31.8%).
Most cases (90.2%) are diagnosed prenatally at 14.3 ± 2.6 (range of 11 to 36) gestational weeks, and pregnancies are mainly terminated, reducing the number of live births to one-fifth of the total prevalence rate. Barisic et al concluded that early diagnosis is important for timely counseling of affected couples regarding the option of pregnancy termination and prenatal genetic testing in future pregnancies.

Ece Solmaz et al (2015) stated that Bardet-Biedl syndrome (BBS) is characterized by obesity, rod-cone dystrophy, postaxial polydactyly, renal abnormalities, genital abnormalities and learning difficulties; mutations in 21 different genes have been described as being responsible for BBS. Recently, sequential gene sequencing has been replaced by NGS applications. The investigators conducted a study in which 15 patients with clinically diagnosed BBS were investigated using an NGS panel including 17 known BBS-causing genes (BBS1, BBS2, ARL6, BBS4, BBS5, MKKS, BBS7, TTC8, BBS9, BBS10, TRIM32, BBS12, MKS1, NPHP6, WDPCP, SDCCAG8, NPHP1). A genetic diagnosis was achieved in 13 patients (86.6%) and involved 9 novel and 3 previously described pathogenic variants in 6 of the 17 BBS-causing genes; three of the 13 patients had an affected sibling. The authors concluded that, although limited association between certain genes and phenotypic features was observed in this study, additional studies are needed to better characterize the genotype-phenotype correlation of BBS. Nevertheless, the results demonstrate that NGS panels are a feasible and effective method for providing high diagnostic yields in diseases caused by multiple genes, such as BBS.

Roosing et al (2015) reported that defective primary ciliogenesis or cilium stability forms the basis of human ciliopathies, including Joubert syndrome (JS). They evaluated patients with defective cerebellar vermis development and performed a high-content genome-wide small interfering RNA screen to identify genes regulating ciliogenesis as candidates for JS. The investigators identified 591 likely candidates; intersection of these data with whole-exome results from 145 individuals with unexplained JS identified six families with predominantly compound heterozygous mutations in KIAA0586. A c.428del base deletion, present in 0.1% of the general population, was found with a second mutation in an additional 9 of 163 unexplained JS patients. The investigators concluded that KIAA0586, an orthologue of chick Talpid3, is required for ciliogenesis and sonic hedgehog signaling; this uncovers a relatively high-frequency cause of JS and contributes a list of candidates for future gene discoveries in ciliopathies.

Suspitsin et al (2015) stated that at least 19 genes have been shown to be associated with BBS and, therefore, genetic testing is highly complicated. The authors used an Illumina MiSeq platform for WES analysis of a family with strong clinical features of BBS and found a homozygous c.1967_1968delTAinsC (p.Leu656fsX673; RefSeq NM_176824.2) mutation in BBS7 in both affected children, while their healthy sibling and the non-consanguineous parents were heterozygous for this allele. Subsequent genotyping of 2,832 DNA samples obtained from Russian blood donors revealed 2 additional heterozygous subjects (0.07%) with the c.1967_1968delTAinsC mutation.
Wheway et al (2015) identified 112 candidate ciliogenesis and ciliopathy genes, including 44 components of the ubiquitin-proteasome system, 12 G-protein-coupled receptors, and 3 pre-mRNA processing factors (PRPF6, PRPF8 and PRPF31) mutated in autosomal dominant retinitis pigmentosa. Combining the screen with exome sequencing data identified recessive mutations in PIBF1, also known as CEP90, and C21orf2, also known as LRRC76, as causes of the ciliopathies Joubert and Jeune syndromes. The authors' approaches provide insights into the complexity of ciliogenesis and identify roles for unanticipated pathways in human genetic disease.

Suspitsin et al (2016) stated that BBS is a rare autosomal recessive genetic disorder characterized by heterogeneous clinical manifestations, including primary features of the disease such as rod-cone dystrophy, polydactyly, obesity, genital abnormalities, renal defects and learning difficulties, and secondary BBS characteristics such as developmental delay, speech deficit, brachydactyly or syndactyly, dental defects, ataxia or poor coordination, olfactory deficit, diabetes mellitus, and congenital heart disease. A minimum of 20 BBS genes have already been identified, and all of them are involved in primary cilia functioning. Genetic diagnosis of BBS is complicated by the lack of gene-specific disease symptoms, but is gradually becoming more accessible with the advent of multigene sequencing technologies. Progress in DNA testing technologies is likely to rapidly resolve all limitations in BBS diagnosis; however, much slower improvement is expected with regard to BBS treatment.

CancerNext™ (Ambry Genetics) utilizes next-generation sequencing to offer a comprehensive testing panel for hereditary cancer and targets detection of mutations in 22 genes, including APC, BMPR1A, CDH1, CHEK2, EPCAM, MLH1, MSH2, MSH6, MUTYH, PMS2, PTEN, SMAD4, STK11, and TP53 (Raman, et al. 2013). Gross deletion/duplication analysis is performed for all 22 genes. CancerNext™ is a next-generation cancer panel that simultaneously analyzes selected genes associated with a wide range of cancers. While mutations in each gene on this panel may be individually rare, they may collectively account for a significant amount of hereditary cancer susceptibility.

OvaNext (Ambry Genetics) is a next-generation sequencing panel that simultaneously analyzes 19 genes that contribute to increased risk for breast, ovarian, and/or uterine cancers (Raman, et al. 2013). The test is intended to determine if a woman has an increased chance of developing breast, ovarian, and/or uterine cancer.

BreastNext utilizes next-generation sequencing to offer a comprehensive testing panel for hereditary breast and/or ovarian cancer and targets detection of mutations in 14 genes (ATM, BARD1, BRIP1, CDH1, CHEK2, MRE11A, MUTYH, NBN, PALB2, PTEN, RAD50, RAD51C, STK11 and TP53), excluding BRCA1 and BRCA2 (Raman, et al. 2013). Gross deletion/duplication analysis is performed for all 14 genes. Mutations in BRCA1 and BRCA2 explain hereditary breast cancer occurrence 25% to 50% of the time; additional genes associated with hereditary breast cancer are emerging. Studies suggest that mutations in the genes on the BreastNext™ panel may confer an estimated 25% to 70% lifetime risk for breast cancer.
An assessment prepared for the Australian Health Policy Advisory Committee on Technology (HealthPACT) (Mundy, 2013) noted that many of the genes included in BreastNext are not associated only with breast cancer, and therefore a mutation in one of these genes may indicate an elevated risk of a number of different cancers, making a specific management and surveillance strategy difficult. Several of the genes included in the BreastNext panel are involved in DNA repair: CHEK2, ATM, BRIP1 and PALB2. These genes are associated with an estimated two-fold risk of breast cancer in women and have been shown by numerous studies to be rare in the population. The assessment noted that there appears to be a reluctance by Australian clinicians to use this technology because of potential difficulties in interpreting the significance of mutations in some of the genes included in the panel. The assessment also noted that mutations detected in some of the genes included in the BreastNext panel may be considered amorphous in that they represent an increased risk, but there is little that may be done about that risk. In addition, the risk is not specific to one cancer; for example, a mutation in the TP53 gene may represent an increased risk of cancer of the breast, colorectum or brain, making surveillance difficult. A positive test may therefore result in undue worry and stress for the patient. The review identified no peer-reviewed studies in the literature that describe the use of the BreastNext panel in women considered to have a predisposition to breast cancer. Ambry Genetics has a number of conference proceedings published on its website. Results for the first 400 BRCA1- and BRCA2-negative women to receive testing with BreastNext were reported to the 2013 conference of the U.S. National Consortium of Breast Centers (NCBC). The NCBC is an organisation of breast professionals, breast centres, providers of service to care providers, and corporations that supply equipment and pharmaceuticals to care providers. Of the 400 women, 41 (10%) were found to have a mutation in the following genes: PALB2 (n = 9), ATM (n = 9), CHEK2 (n = 8), MUTYH (n = 4), BARD1 (n = 3), RAD50 (n = 3), and one each in PTEN, RAD51C, TP53, MRE11A and NBN (level IV diagnostic evidence); these counts are tallied in the brief sketch following this passage. However, the significance of these results and the implications of these positive results for patients were not discussed. The HealthPACT assessment noted that positive tests may be difficult to interpret if the genetic variant is not the known founder mutation. In addition, mutations may occur in genes of known importance, but whether these mutations result in functional changes with consequences for the health of the individual may remain unknown. The HealthPACT assessment of BreastNext found that 21-gene and 14-gene panels detected a number of mutations in candidate genes, which may be of significance in women considered to be at a genetic predisposition to breast or ovarian cancer. The impact of these findings on patient outcomes was not discussed in any of the included papers. It may be assumed that these women and their first-degree relatives would then enter a surveillance programme. The HealthPACT assessment noted that, in Australia, first-degree relatives would be eligible to enter into such a programme even if testing for BRCA1 and BRCA2 was negative; therefore, it is difficult to gauge the usefulness of tests such as BreastNext and OvaNext.
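As referenced above, the NCBC conference figures reduce to a simple tally. The sketch below checks that the reported per-gene counts sum to the 41 positives and reproduces the roughly 10% detection rate; the counts are copied from the summary above, while the variable names and the code itself are illustrative.

```python
# Illustrative tally of the BreastNext results reported to the 2013 NCBC conference,
# as summarized above. The counts are copied from that summary; the code is a sketch.

mutation_counts = {
    "PALB2": 9, "ATM": 9, "CHEK2": 8, "MUTYH": 4, "BARD1": 3, "RAD50": 3,
    "PTEN": 1, "RAD51C": 1, "TP53": 1, "MRE11A": 1, "NBN": 1,
}

women_tested = 400
positives = sum(mutation_counts.values())          # 41 women with a reported mutation
detection_rate = 100.0 * positives / women_tested  # approximately 10%

print(f"{positives} of {women_tested} women ({detection_rate:.1f}%) had a reported mutation")
```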
The HealthPACT assessment concluded that, although some of the genes included in the BreastNext panel may be associated with breast cancer, there is a paucity of evidence linking all of the included mutations with the disease. There were no peer-reviewed studies identified that could demonstrate the clinical utility of this product, and therefore the impact this product may have on clinical decision-making cannot be determined. Of concern is that these tests may be accessed by women who may not require testing, and that this may have consequences for the public health system. Therefore, it was recommended that no further research on behalf of HealthPACT is warranted at this time.

ColoNext™ (Ambry Genetics) utilizes next-generation sequencing to offer a comprehensive testing panel for hereditary colon cancer and targets detection of mutations in 14 genes (APC, BMPR1A, CDH1, CHEK2, EPCAM, MLH1, MSH2, MSH6, MUTYH, PMS2, PTEN, SMAD4, STK11, and TP53) (Raman, et al. 2013). Gross deletion/duplication analysis is performed for all 14 genes. ColoNext™ is a next-generation cancer panel that simultaneously analyzes selected genes associated with a wide range of cancers. While mutations in each gene on this panel may be individually rare, they may collectively account for a significant amount of hereditary cancer susceptibility.

ColoSeq™ (University of Washington Laboratory Medicine Genetics Lab) is a comprehensive genetic test for hereditary colon cancer that uses next-generation sequencing to detect mutations in multiple genes associated with Lynch syndrome (hereditary non-polyposis colorectal cancer, HNPCC), familial adenomatous polyposis (FAP), MUTYH-associated polyposis (MAP), hereditary diffuse gastric cancer (HDGC), Cowden syndrome, Li-Fraumeni syndrome, Peutz-Jeghers syndrome, Muir-Torre syndrome, Turcot syndrome, and Juvenile Polyposis syndrome (Raman, et al. 2013). The assay sequences all exons, introns, and flanking sequences of the 13 genes. Large deletions, duplications, and mosaicism are also detected by the assay and reported.

PANEXIA® (Myriad Genetics) detects mutations in genes that result in an increased risk of pancreatic cancer, offering insight about the risk of future hereditary cancers for patients and their families (Raman, et al. 2013). PANEXIA, via a blood test, analyzes the PALB2 and BRCA2 genes, the two genes most commonly identified in families with hereditary pancreatic cancer. The PANEXIA test results provide information for patients and their family members about the inherited risks of pancreatic cancer as well as breast, ovarian, and other cancers. This knowledge may allow at-risk family members the opportunity to lower their risks for some of these cancers through surveillance, preventative options, or lifestyle choices. The test is intended to determine if a person has an increased risk of developing pancreatic and/or breast cancer by detecting mutations in the PALB2 and BRCA2 genes. The results of the test are intended to enable the development of a patient-specific medical management plan to reduce the risk of cancer.

Amsterdam II criteria: At least 3 relatives must have an HNPCC-related cancer, and all of the following criteria must be present: at least 1 of the relatives with an HNPCC-associated cancer should be diagnosed before age 50 years; at least 2 successive generations must be affected; FAP should be excluded in the colorectal cancer cases (if any); at least 1 affected relative should be a first-degree relative of the other 2; and tumors should be verified by pathological examination.
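The Amsterdam II criteria listed above are a conjunction of simple conditions, so they can be illustrated as a boolean check. In the sketch below the field names are assumptions chosen for readability; the criteria text above remains the authoritative statement.

```python
# Illustrative encoding of the Amsterdam II criteria listed above.
# Field names are assumptions chosen for readability, not formal terminology.

from dataclasses import dataclass

@dataclass
class Pedigree:
    hnpcc_related_cancer_cases: int        # relatives with an HNPCC-related cancer
    case_diagnosed_before_50: bool         # at least one case diagnosed before age 50
    successive_generations_affected: int
    fap_excluded_in_crc_cases: bool
    one_first_degree_of_other_two: bool
    tumors_verified_by_pathology: bool

def meets_amsterdam_ii(p: Pedigree) -> bool:
    """True only if every listed criterion is satisfied."""
    return (
        p.hnpcc_related_cancer_cases >= 3
        and p.case_diagnosed_before_50
        and p.successive_generations_affected >= 2
        and p.fap_excluded_in_crc_cases
        and p.one_first_degree_of_other_two
        and p.tumors_verified_by_pathology
    )

# Example: a family meeting all of the listed criteria.
print(meets_amsterdam_ii(Pedigree(3, True, 2, True, True, True)))  # True
```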