<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="https://www.mindfiretechnology.com/blog/rss/xslt"?>
<rss xmlns:a10="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Mindfire Technology</title>
    <link>https://www.mindfiretechnology.com/blog/</link>
    <description>Welcome to our blog, where we share technical and business knowledge based on real life experiences.</description>
    <generator>Articulate, blogging built on Umbraco</generator>
    <item>
      <guid isPermaLink="false">2326</guid>
      <link>https://www.mindfiretechnology.com/blog/archive/semantic-search-and-cosine-similarity/</link>
      <title>Semantic Search and Cosine Similarity</title>
      <description>&lt;p&gt;In my last post, I talked about &lt;a href="https://www.mindfiretechnology.com/blog/archive/cosine-similarity/"&gt;Cosine Similarity&lt;/a&gt; and how to use it to find similarities between two vectors. We also talked about how we can convert two sentences into two vectors and then use Cosine Similarity to compare how similar two sentences are.&lt;/p&gt;
&lt;p&gt;Now let’s put our new knowledge to work and use it to do a ‘semantic search’ within an entire book. &lt;strong&gt;(1)&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;What is a Semantic Search?&lt;/h2&gt;
&lt;p&gt;So, what is a ‘semantic search’? Instead of searching a book for an exact matching word or phrase, we’ll create a way to search for similar ideas across synonyms. Or even being able to ask a question and find the best matching answer. &lt;/p&gt;
&lt;p&gt;First, we’ll need a book. Let’s pick something not too large, such as &lt;a href="https://www.gutenberg.org/ebooks/1404"&gt;The Federalist Papers from Project Gutenberg&lt;/a&gt;. You can grab your own copy there or find it in my GitHub for my &lt;a href="https://github.com/brucenielson/Blog-Posts"&gt;blog posts&lt;/a&gt;. Look for “Federalist Papers.epub”. Or use a book of your own, so long as it is in epub format.&lt;/p&gt;
&lt;h2&gt;Installing the Needed Software&lt;/h2&gt;
&lt;p&gt;First, install the software we’ll need for this post using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pip install sentence_transformers
pip install ebooklib
pip install bs4
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once those are installed, you can import the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from sentence_transformers import SentenceTransformer
from ebooklib import epub, ITEM_DOCUMENT
from bs4 import BeautifulSoup
import numpy as np
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you don’t already have NumPy installed, you can run ‘pip install numpy’ to install it.&lt;/p&gt;
&lt;h2&gt;Read In the Epub File&lt;/h2&gt;
&lt;p&gt;First, let’s write a function to read in the book’s epub file. &lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def epub_to_paragraphs(epub_file_path, min_words=0):
    paragraphs = []
    book = epub.read_epub(epub_file_path)

    for section in book.get_items_of_type(ITEM_DOCUMENT):
        paragraphs.extend(epub_sections_to_paragraphs(section, min_words=min_words))

    return paragraphs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This function takes the name of the file and ‘min_words’, the minimum number of words a paragraph must contain before we strip it out. I found this useful for removing pages or paragraphs that have no real content, like a title page.&lt;/p&gt;
&lt;p&gt;We then read in the epub file using “epub.read&amp;#95;epub”.&lt;/p&gt;
&lt;p&gt;An epub file may contain several parts. We are interested in the actual text of the book (versus, say, the cover or the images) so we loop over “book.get&amp;#95;items&amp;#95;of&amp;#95;type(ITEM&amp;#95;DOCUMENT)” to get each section of the book.&lt;/p&gt;
&lt;h2&gt;Getting Paragraphs&lt;/h2&gt;
&lt;p&gt;But what we really want is not ‘sections’ but individual paragraphs. So let’s write this function:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def epub_sections_to_paragraphs(section, min_words=0):
    html = BeautifulSoup(section.get_body_content(), 'html.parser')
    p_tag_list = html.find_all('p')
    paragraphs = [
        {
            'text': paragraph.get_text().strip(),
            'chapter_name': ' '.join([heading.get_text().strip() for heading in html.find_all('h1')]),
            'para_no': para_no,
        }
        for para_no, paragraph in enumerate(p_tag_list)
        if len(paragraph.get_text().split()) &amp;gt;= min_words
    ]
    return paragraphs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This function uses the “BeautifulSoup” library that we installed to parse each section as HTML. That lets us grab every &amp;lt;p&amp;gt; tag, each of which becomes a paragraph. We also pull the chapter name out of the &amp;lt;h1&amp;gt; headings. This is also where we filter out any paragraph that doesn’t have at least ‘min&amp;#95;words’ words. For now, we’ll just take everything, so min&amp;#95;words = 0.&lt;/p&gt;
&lt;h2&gt;Embeddings&lt;/h2&gt;
&lt;p&gt;To work the semantic search magic, we need to be able to encode the book and later the query into ‘embeddings’ which are the vectors we’ll perform the cosine similarity on. So let’s write two functions to do this for us:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def create_embeddings(texts, model):
    return model.encode([text.replace(&amp;quot;\n&amp;quot;, &amp;quot; &amp;quot;) for text in texts])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This first function will take the text we want to encode as well as a model (that’s what sentence&amp;#95;transformers is) that will do the encoding. We are removing all newline characters on the fly so that we’re dealing only with the text itself.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def get_embeddings(model, paragraphs):
    texts = [para['text'] for para in paragraphs]
    return create_embeddings(texts, model)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This second function takes our paragraph dictionaries from the epub&amp;#95;sections&amp;#95;to&amp;#95;paragraphs function (above), grabs only the text, and passes it to create&amp;#95;embeddings.&lt;/p&gt;
&lt;h2&gt;Semantic Search&lt;/h2&gt;
&lt;p&gt;Finally, we need a cosine similarity function (from our previous blog post):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def cosine_similarity(query_embedding, embeddings):
    dot_products = np.dot(embeddings, query_embedding)
    query_magnitude = np.linalg.norm(query_embedding)
    embeddings_magnitudes = np.linalg.norm(embeddings, axis=1)
    cosine_similarities = dot_products / (query_magnitude * embeddings_magnitudes)
    return cosine_similarities
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And finally, here is the function to do the actual semantic search:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def semantic_search(model, embeddings, query, top_results=5):
    query_embedding = create_embeddings([query], model)[0]
    scores = cosine_similarity(query_embedding, embeddings)
    results = np.argsort(scores)[::-1][:top_results].tolist()
    return results
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This function takes the model (the sentence&amp;#95;transformer), the embeddings we created, and a query (the search string), as well as how many top answers to return. We embed/encode the query, run a cosine similarity against each paragraph of the book, and take the top matches.&lt;/p&gt;
&lt;h2&gt;Putting It All Together&lt;/h2&gt;
&lt;p&gt;With these functions in place, now we can write our code to do the semantic search:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def test_semantic_search():
    paragraphs = epub_to_paragraphs(r&amp;quot;Federalist Papers.epub&amp;quot;, min_words=3)
    model = SentenceTransformer('sentence-transformers/multi-qa-mpnet-base-dot-v1')
    embeddings = get_embeddings(model, paragraphs)

    query = 'Are we a democracy or a republic?'
    results = semantic_search(model, embeddings, query)

    print(&amp;quot;Top results:&amp;quot;)
    for result in results:
        para_info = paragraphs[result]
        chapter_name = para_info['chapter_name']
        para_no = para_info['para_no']
        paragraph_text = para_info['text']
        print(f&amp;quot;Chapter: '{chapter_name}', Passage number: {para_no}, Text: '{paragraph_text}'&amp;quot;)
        print('')
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s go over this in detail:&lt;/p&gt;
&lt;p&gt;Load the book: &lt;/p&gt;
&lt;pre&gt;&lt;code&gt;paragraphs = epub_to_paragraphs(r&amp;quot;Federalist Papers.epub&amp;quot;, min_words=3)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Load the Hugging Face model that does the embeddings (turning the book into vectors so that we can do the cosine similarity) and then create the embeddings:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;model = SentenceTransformer('sentence-transformers/multi-qa-mpnet-base-dot-v1')
embeddings = get_embeddings(model, paragraphs)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Do the actual Semantic Search:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;query = 'Are we a democracy or a republic?'
results = semantic_search(model, embeddings, query)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So here we’re asking the question “Are we a democracy or a republic?” and searching for the best answer out of the Federalist Papers. Here was the top result I got:&lt;/p&gt;
&lt;p&gt;Passage number: 56, Text: 'A republic, by which I mean a government in which the scheme of representation takes place, opens a different prospect, and promises the cure for which we are seeking. Let us examine the points in which it varies from pure democracy, and we shall comprehend both the nature of the cure and the efficacy which it must derive from the Union.'&lt;/p&gt;
&lt;p&gt;That’s a spot-on result! &lt;/p&gt;
&lt;p&gt;You can find the entire code at my &lt;a href="https://github.com/brucenielson/Blog-Posts"&gt;GitHub&lt;/a&gt;. The file “&lt;a href="https://github.com/brucenielson/Blog-Posts/blob/main/blog_semantic_search.py"&gt;&lt;code&gt;blog_semantic_search.py&lt;/code&gt;&lt;/a&gt;” contains all the code above. Give it a run and see how it works!&lt;/p&gt;
&lt;h2&gt;How (Why?) Does it Work?&lt;/h2&gt;
&lt;p&gt;One question you might ask is: how and why does this actually work? In our last blog post we basically just counted words to determine similarity, but this is clearly doing something far more sophisticated. The answer is that sentence&amp;#95;transformers embeds words into a vector space such that synonyms and related concepts are closer to each other. That is how and why we get such good results out of a semantic search like this with so little code.&lt;/p&gt;
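&lt;p&gt;As a toy illustration of that idea, here are some made-up 2-D vectors standing in for real embeddings (purely hypothetical numbers; a real model produces vectors with hundreds of dimensions), showing how related concepts end up closer together in vector space:&lt;/p&gt;

```python
import numpy as np

# Made-up 2-D 'embeddings' for illustration only: a real sentence-transformer
# model places related concepts close together in a high-dimensional space.
vectors = {
    'republic': np.array([0.9, 0.2]),
    'democracy': np.array([0.8, 0.35]),
    'banana': np.array([0.05, 0.95]),
}

def cos_sim(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'republic' scores far more similar to 'democracy' than to 'banana'.
print(cos_sim(vectors['republic'], vectors['democracy']))
print(cos_sim(vectors['republic'], vectors['banana']))
```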
&lt;p&gt;For those interested in pursuing this further, I have a fuller featured version of Semantic Search available in a different &lt;a href="https://github.com/brucenielson/Blog-Posts/blob/main/ebook_semantic_search.py"&gt;GitHub repo&lt;/a&gt;. This version includes loading PDFs (which I’ll cover in a future blog post) and offers a lot more options about how to break down the text into pages or paragraphs.&lt;/p&gt;
&lt;p&gt;I also have a &lt;a href="https://colab.research.google.com/drive/1k3llRQdzVH68wBUhoaOlYDZBQPqsetXQ?usp=sharing"&gt;Google Colab&lt;/a&gt; with the same code to try out.&lt;/p&gt;
&lt;p&gt;Notes:
(1) With thanks to Dwarkesh Patel of the Dwarkesh Podcast for the idea for this blog post; see his &lt;a href="https://colab.research.google.com/drive/1PDT-jho3Y8TBrktkFVWFAPlc7PaYvlUG?usp=sharing"&gt;Google Colab&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 19 Mar 2024 09:00:00 -0600</pubDate>
      <a10:updated>2024-03-19T09:00:00-06:00</a10:updated>
    </item>
    <item>
      <guid isPermaLink="false">2320</guid>
      <link>https://www.mindfiretechnology.com/blog/archive/cosine-similarity/</link>
      <title>Cosine Similarity</title>
      <description>&lt;p&gt;In this blog post I’m going to answer the question I know has been burning in your mind: What is Cosine Similarity and how does it affect your life?&lt;/p&gt;
&lt;p&gt;Yes, I know you’ve all been hearing about Cosine Similarity everywhere! You can hardly go 15 minutes before someone brings it up in casual conversation, acting like you should know what it is. You feel stupid that you don’t know what they are talking about. You are too ashamed to admit your ignorance.&lt;/p&gt;
&lt;p&gt;Well, fear not dear reader! After reading this article you’ll be able to end those feelings of foolishness and be able to join in on the conversation!&lt;/p&gt;
&lt;p&gt;Wikipedia defines Cosine Similarity as:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.mindfiretechnology.com/blog/media/cosinesimilaritypicture1.png" alt="Image depicting the formula of cosine similarity, credited to Wikipedia." /&gt;&lt;/p&gt;
&lt;p&gt;There you go! Now you’ve been educated! Do you feel better? No?&lt;/p&gt;
&lt;p&gt;Okay, let’s use an example to make it easier to make sense of.&lt;/p&gt;
&lt;p&gt;Let’s say we want to measure how similar two sentences are. Let’s use the following sentence:&lt;/p&gt;
&lt;p&gt;“This blog post sucks”&lt;/p&gt;
&lt;p&gt;Vs&lt;/p&gt;
&lt;p&gt;“This blog post is awesome”&lt;/p&gt;
&lt;p&gt;There are various ways we might represent this sentence mathematically so that a Machine Learning model can make sense of it; but let’s stick with something pretty simple. We’ll imagine the following numbered list:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;This&lt;/li&gt;
&lt;li&gt;Blog&lt;/li&gt;
&lt;li&gt;Post&lt;/li&gt;
&lt;li&gt;Is&lt;/li&gt;
&lt;li&gt;Sucks&lt;/li&gt;
&lt;li&gt;Awesome&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So, let’s imagine a ‘vector’ (which is really just an ordered list) where we put a 1 if the word exists in the sentence and a zero if it doesn’t. So:&lt;/p&gt;
&lt;p&gt;This blog post sucks = [1, 1, 1, 0, 1, 0]&lt;/p&gt;
&lt;p&gt;This blog post is awesome = [1, 1, 1, 1, 0, 1]&lt;/p&gt;
&lt;p&gt;Note that we’re only comparing which words appear in each sentence, not their order. This becomes clear if we realize:&lt;/p&gt;
&lt;p&gt;Post this blog sucks = [1, 1, 1, 0, 1, 0] just like “This blog post sucks”&lt;/p&gt;
&lt;p&gt;If we were doing this for real, perhaps we might want to take order into consideration since clearly the order of words matters to the meaning of a sentence. When we drop the order of the words like this, we call this a “Bag of Words”. For this simple example, we’re going to ignore order of words because otherwise the math becomes too difficult to keep it simple.&lt;/p&gt;
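&lt;p&gt;The encoding described above takes only a few lines. Here is a toy sketch of a bag-of-words encoder over our six-word vocabulary:&lt;/p&gt;

```python
# A toy bag-of-words encoder over our six-word vocabulary.
vocab = ['this', 'blog', 'post', 'is', 'sucks', 'awesome']

def bag_of_words(sentence):
    # Mark 1 if the vocabulary word appears in the sentence, 0 otherwise.
    words = set(sentence.lower().split())
    return [1 if word in words else 0 for word in vocab]

print(bag_of_words('This blog post sucks'))       # [1, 1, 1, 0, 1, 0]
print(bag_of_words('This blog post is awesome'))  # [1, 1, 1, 1, 0, 1]
print(bag_of_words('Post this blog sucks'))       # same vector: order is ignored
```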
&lt;p&gt;So, now we have two ‘vectors’:&lt;/p&gt;
&lt;p&gt;[1, 1, 1, 0, 1, 0] and [1, 1, 1, 1, 0, 1]&lt;/p&gt;
&lt;p&gt;How ‘similar’ are these two vectors? You could probably come up with some sort of measure if you stopped and thought about it, and there is no one ‘True’ way to measure the similarity and difference of these two vectors. &lt;/p&gt;
&lt;p&gt;But here is a clever idea: Let’s treat this vector as if it was a geometrical vector and then measure the angle of difference between them. &lt;/p&gt;
&lt;p&gt;To see why this works, let’s imagine two vectors on a 2D plane (since it is hard to imagine six-dimensional space like our bag of words example – though a computer doesn’t care how many dimensions it is calculating in).&lt;/p&gt;
&lt;p&gt;Let’s imagine two unit vectors (basically two line segments of length 1) on a grid. The first is at 45 degrees and the second is at 75 degrees:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.mindfiretechnology.com/blog/media/cosinesimilaritypicture2.png" alt="A line chart depicting two vectors, one at 45 degrees and another at 75 degrees. The Y-axis starts at 0.0 and ends at 1.0 in degrees of 0.2. The X-axis starts at 0.0 and ends at 0.7 in degrees of 0.1." /&gt;&lt;/p&gt;
&lt;p&gt;How might we measure the ‘similarity’ between these two lines?&lt;/p&gt;
&lt;p&gt;One obvious idea is to measure how far they are rotated from each other. That is to say:&lt;/p&gt;
&lt;p&gt;75-45 = 30&lt;/p&gt;
&lt;p&gt;Now take the Cosine of that to place it as a value between 0 and 1:&lt;/p&gt;
&lt;p&gt;Cos(30) = 0.8660254&lt;/p&gt;
&lt;p&gt;Or in other words these lines are 86.6% the same. &lt;strong&gt;(1)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Just to prove the point, let’s try this again with 45 degrees and 100 degrees or:&lt;/p&gt;
&lt;p&gt;Cos(100-45) = 0.57357644&lt;/p&gt;
&lt;p&gt;So those are not as similar. &lt;/p&gt;
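&lt;p&gt;You can verify these numbers yourself. Note that Python’s math.cos expects radians, so we convert from degrees first:&lt;/p&gt;

```python
import math

# Cosine of the angle between vectors at 45 and 75 degrees, then 45 and 100.
print(math.cos(math.radians(75 - 45)))   # ~0.866
print(math.cos(math.radians(100 - 45)))  # ~0.574
```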
&lt;p&gt;Okay, but how can we use this same idea with comparing sentences?&lt;/p&gt;
&lt;p&gt;Well, to the computer a vector can be treated as a line in (in this case) 6-dimensional space. So we just calculate the cosine between the two vectors, and that effectively rates how similar they are.&lt;/p&gt;
&lt;p&gt;Let’s now break down that intimidating Wikipedia formula and work this out for our vectors so that we can compare sentences. Here is some Python code that does the job:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import numpy as np

def cosine_similarity(x, y):
    assert x.shape[0] == y.shape[1], &amp;quot;Dimension mismatch: x vector size should match the number of columns in y&amp;quot;
    dot_products = np.dot(y, x)
    x_magnitude = np.linalg.norm(x)
    y_magnitudes = np.linalg.norm(y, axis=1)
    cosine_similarities = dot_products / (x_magnitude * y_magnitudes)

    return cosine_similarities
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s walk through this one part at a time. This from the Wikipedia formula:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.mindfiretechnology.com/blog/media/cosinesimilaritypicture3.png" alt="A*B" /&gt;&lt;/p&gt;
&lt;p&gt;Is equivalent to this from the python code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;dot_products = np.dot(y, x)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We want to take the dot product of the two vectors. You can look up the details, but it’s a function built into NumPy, so don’t worry too much about it; it’s a fairly standard matrix operation. Note that this dot operation will only work if the length of x matches the number of columns of y, hence our assertion checking for that.&lt;/p&gt;
&lt;p&gt;Then this from the Wikipedia formula:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.mindfiretechnology.com/blog/media/cosinesimilaritypicture4.png" alt="||A||||B||" /&gt;&lt;/p&gt;
&lt;p&gt;It is saying: take the magnitudes of the vectors and multiply them together. (Do you recall how to find a magnitude from high school geometry? It’s exactly the same as finding the length of a line, except you are doing it in any number of dimensions.) Again, this is built into NumPy:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;x_magnitude = np.linalg.norm(x)
y_magnitudes = np.linalg.norm(y, axis=1)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, we take that dot product and divide it by the multiplied magnitudes:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.mindfiretechnology.com/blog/media/cosinesimilaritypicture5.png" alt="A*B/||A||||B||" /&gt;&lt;/p&gt;
&lt;p&gt;Which is this in the python code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cosine_similarities = dot_products / (x_magnitude * y_magnitudes)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you really wanted to not use the built-in functions in NumPy and calculate it out here is what the revised python code would look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import math

def cosine_similarity(x, y):

    assert len(x) == len(y), &amp;quot;Dimension mismatch: Vectors must have the same length&amp;quot;

    dot_products = sum(xi * yi for xi, yi in zip(x, y))
    x_magnitude = math.sqrt(sum(xi ** 2 for xi in x))
    y_magnitude = math.sqrt(sum(yi ** 2 for yi in y))

    cosine_similarity = dot_products / (x_magnitude * y_magnitude) if x_magnitude * y_magnitude != 0 else 0

    return cosine_similarity
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And there you go: we now have a function to calculate the cosine similarity between two vectors. Let’s actually run it on our simple example. Recall our two vectors were:&lt;/p&gt;
&lt;p&gt;[1, 1, 1, 0, 1, 0] and [1, 1, 1, 1, 0, 1]&lt;/p&gt;
&lt;p&gt;So take the dot product:&lt;/p&gt;
&lt;p&gt;1*1 + 1*1 + 1*1 + 0*1 + 1*0 + 0*1 = 3&lt;/p&gt;
&lt;p&gt;And take the magnitudes:&lt;/p&gt;
&lt;p&gt;Sqrt(1^2 + 1^2 + 1^2 + 0^2 + 1^2 + 0^2) = Sqrt(4) = 2&lt;/p&gt;
&lt;p&gt;Vs&lt;/p&gt;
&lt;p&gt;Sqrt(1^2 + 1^2 + 1^2 + 1^2 + 0^2 + 1^2) = Sqrt(5) = 2.24&lt;/p&gt;
&lt;p&gt;The result:&lt;/p&gt;
&lt;p&gt;3 / (2 * 2.24) = 3 / (4.48) = 0.6696&lt;/p&gt;
&lt;p&gt;And our Python code calculates 0.6708; the small difference from our hand calculation comes from rounding Sqrt(5) to 2.24, but it’s basically the same.&lt;/p&gt;
&lt;p&gt;So, these two sentences (as measured via a bag of words) are 67% the same.&lt;/p&gt;
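&lt;p&gt;We can confirm the worked example by running the pure-Python version of cosine&amp;#95;similarity from above on our two bag-of-words vectors:&lt;/p&gt;

```python
import math

# The pure-Python cosine_similarity from above, applied to our two vectors.
def cosine_similarity(x, y):
    dot_products = sum(xi * yi for xi, yi in zip(x, y))
    x_magnitude = math.sqrt(sum(xi ** 2 for xi in x))
    y_magnitude = math.sqrt(sum(yi ** 2 for yi in y))
    return dot_products / (x_magnitude * y_magnitude) if x_magnitude * y_magnitude != 0 else 0

print(round(cosine_similarity([1, 1, 1, 0, 1, 0], [1, 1, 1, 1, 0, 1]), 4))  # 0.6708
```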
&lt;p&gt;The beauty of cosine similarity is that you can use it on anything that you can represent as vectors! This will turn out to be really useful in our next post where we tackle using cosine similarity with Large Language Models (LLMs).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Note that I’m actually sort of lying here. To consider them 86.6% the same, I’m making an additional assumption that we’re specifically talking about cosine similarity for text. Cosines actually range not from 0 to 1 but from -1 to 1, so a cosine of 0.866 can’t really be considered to mean 86.6% ‘the same.’ However, text frequency can’t be negative, so we’d expect the results to range from 0 to 1. Within that context we can think of results as a percentage of similarity.&lt;/p&gt;
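&lt;p&gt;A quick demonstration of that negative range: two vectors pointing in opposite directions have a cosine similarity of -1:&lt;/p&gt;

```python
import math

# Opposite-pointing vectors give a cosine similarity of -1, the minimum.
a, b = [1, 0], [-1, 0]
dot = sum(x * y for x, y in zip(a, b))
cos = dot / (math.hypot(*a) * math.hypot(*b))
print(cos)  # -1.0
```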
&lt;p&gt;Also be sure to check out my next article addressing how to use &lt;a href="https://www.mindfiretechnology.com/blog/archive/semantic-search-and-cosine-similarity/"&gt;Cosine Similarity for Semantic Searches&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To stay in the loop, make sure to follow us on &lt;a href="https://www.linkedin.com/company/2877971/"&gt;LinkedIn&lt;/a&gt;, and also be sure to have a look at our other articles here on the Mindfire Blog.&lt;/p&gt;
</description>
      <pubDate>Tue, 12 Mar 2024 09:00:00 -0600</pubDate>
      <a10:updated>2024-03-12T09:00:00-06:00</a10:updated>
    </item>
  </channel>
</rss>