
4.3: Image and Kernel - Mathematics


Definition 4.2.0

The image of a homomorphism \(\rho: G \rightarrow H\) is the set \(\{\rho(g) \mid g \in G\} \subset H\), written \(\rho(G)\). The kernel of \(\rho\) is the set \(\{g \mid g \in G, \rho(g) = 1\}\), written \(\rho^{-1}(1)\), where \(1\) is the identity of \(H\).

Let's try an example. Recall the homomorphism \(\phi: \mathbb{Z} \rightarrow \mathbb{Z}\), defined by \(\phi(n) = 2n\) for any \(n \in \mathbb{Z}\). The image of \(\phi\) is the set of all even integers. Notice that the set of all even integers is a subgroup of \(\mathbb{Z}\). The kernel of \(\phi\) is just \(\{0\}\).

Here's another example. Consider the map \(\phi: \mathbb{Z}_3 \rightarrow \mathbb{Z}_6\) given by \(\phi(n) = 2n\). So \(\phi(0) = 0\), \(\phi(1) = 2\), and \(\phi(2) = 4\). This is actually a homomorphism (of additive groups): \(\phi(a+b) = 2(a+b) = 2a + 2b = \phi(a) + \phi(b)\). The image is the set \(\{0, 2, 4\}\), and, again, the kernel is just \(\{0\}\).

And another example. There's a homomorphism \(\rho: \mathbb{Z}_6 \rightarrow \mathbb{Z}_3\) given by \(\rho(a) = a \bmod 3\) (divide by 3 and keep the remainder). Then \(\rho(0) = 0\), \(\rho(1) = 1\), \(\rho(2) = 2\), \(\rho(3) = 0\), \(\rho(4) = 1\), and finally \(\rho(5) = 2\). You can check that this is actually a homomorphism, whose image is all of \(\mathbb{Z}_3\) and whose kernel is \(\{0, 3\}\).
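Since these groups are small, you can verify the image and kernel by brute force. Here is a minimal Python sketch (the helper names are our own, not from the text) applied to this last example:

```python
# Brute-force image and kernel of the homomorphism rho: Z_6 -> Z_3 above.
def image(rho, domain):
    """The set of all values rho(g) for g in the domain."""
    return {rho(g) for g in domain}

def kernel(rho, domain, identity=0):
    """The set of all g in the domain that map to the identity."""
    return {g for g in domain if rho(g) == identity}

Z6 = range(6)
rho = lambda a: a % 3

print(sorted(image(rho, Z6)))   # [0, 1, 2] -- all of Z_3
print(sorted(kernel(rho, Z6)))  # [0, 3]
```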

So the image is the set of all elements of \(H\) that something in \(G\) maps to. The kernel is the set of elements of \(G\) which map to the identity of \(H\). The kernel is a subset of \(G\), while the image is a subset of \(H\). In fact, both are subgroups!

Proposition 4.2.1

The image \(\rho(G)\) is a subgroup of \(H\). The kernel \(\rho^{-1}(1)\) is a subgroup of \(G\).

To see that the kernel is a subgroup, we need to show that it contains the identity and is closed under products and inverses. Certainly \(\rho(1) = 1\), so \(1\) is in the kernel. For any \(g\) and \(h\) in the kernel, \(gh\) is also in the kernel, which follows from the definition of a homomorphism: \(\rho(gh) = \rho(g)\rho(h) = 1 \cdot 1 = 1\). Likewise \(\rho(g^{-1}) = \rho(g)^{-1} = 1^{-1} = 1\), so the kernel is closed under inverses. We leave it to the reader to find the proof that the image is a subgroup of \(H\).

Exercise 4.2.2

Show that for any homomorphism \(\rho: G \rightarrow H\), \(\rho(G)\) is a subgroup of \(H\).

We can use the kernel and image to discern important properties of \(\rho\) as a function.

Proposition 4.2.3

Let \(\rho: G \rightarrow H\) be a homomorphism. Then \(\rho\) is injective (one-to-one) if and only if the kernel \(\rho^{-1}(1) = \{1\}\).

Proof 4.2.4

If we assume \(\rho\) is injective, then we know (from the exercise in the last section) that \(\rho^{-1}(1) = \{1\}\). For the reverse direction, suppose \(\rho^{-1}(1) = \{1\}\), and assume (for contradiction) that \(\rho\) is not injective. Then there exist \(x \neq y\) with \(\rho(x) = \rho(y)\). But then \(\rho(x)\rho(y)^{-1} = \rho(xy^{-1}) = 1\). Since \(x \neq y\), \(xy^{-1} \neq 1\), so the kernel contains a non-identity element, giving a contradiction.

The kernel is actually a very special kind of subgroup.

Proposition 4.2.5

Let \(\rho: G \rightarrow H\) be a homomorphism, and let \(K\) be the kernel of \(\rho\). Then for any \(k \in K\) and \(x \in G\), we have \(xkx^{-1} \in K\).

Proof 4.2.6

The proof is a simple computation: \(\rho(xkx^{-1}) = \rho(x)\rho(k)\rho(x^{-1}) = \rho(x)\,1\,\rho(x)^{-1} = 1\). Therefore, \(xkx^{-1}\) is in the kernel of \(\rho\).
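Proposition 4.2.5 is easy to test computationally on a nonabelian example. The sketch below (our own construction, not from the text) uses the sign homomorphism from \(S_3\), the permutations of \(\{0, 1, 2\}\), to the multiplicative group \(\{+1, -1\}\), and checks that its kernel is closed under conjugation:

```python
# Checking Proposition 4.2.5 on the sign homomorphism S_3 -> {+1, -1}.
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):
    """Parity of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

S3 = list(permutations(range(3)))
K = [p for p in S3 if sign(p) == 1]   # the kernel: the alternating group A_3

# Every conjugate x k x^{-1} of a kernel element lands back in the kernel.
assert all(compose(compose(x, k), inverse(x)) in K
           for x in S3 for k in K)
print("kernel:", K)
```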


Kernel (algebra)

In algebra, the kernel of a homomorphism (function that preserves the structure) is generally the inverse image of 0 (except for groups whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the kernel of a linear map. The kernel of a matrix, also called the null space, is the kernel of the linear map defined by the matrix.

The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is injective, that is, if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective. [1]

For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as normal subgroups for groups and two-sided ideals for rings.

Kernels allow defining quotient objects (also called quotient algebras in universal algebra, and cokernels in category theory). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that the image of a homomorphism is isomorphic to the quotient by the kernel.
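In symbols, for a group homomorphism \(f: G \to H\), this standard statement of the theorem reads (the displayed map is the induced isomorphism):

```latex
% First isomorphism theorem for groups: the quotient of the domain by the
% kernel is isomorphic to the image, via the induced map shown.
\[
  G/\ker f \;\cong\; \operatorname{im} f,
  \qquad g\ker f \;\longmapsto\; f(g).
\]
```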

The concept of a kernel has been extended to structures for which the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation.

This article is a survey of some important types of kernels in algebraic structures.


4.3: Image and Kernel - Mathematics

You can express the solution set as a linear combination of certain constant vectors in which the coefficients are the free variables.

and then one solves x+2y+3z = 0 (this is already reduced). The general solution is

\[
y \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} + z \begin{pmatrix} -3 \\ 0 \\ 1 \end{pmatrix}.
\]

These two vectors clearly span the kernel. They are independent because, each one, in the coordinate spot corresponding to the free variable which is its coefficient, has a 1, while the other vector(s) have a 0 in that spot.

So the vectors produced to span the kernel by this method are always a basis for the kernel, and the dimension of the kernel = number of free variables in solving AX = 0.

In getting a basis for the image one wants to pick out certain columns. The relations on the columns of the rref are the same as the relations on the columns of the original matrix. (Solutions of the equations again.) Therefore, if a set of columns of the rref is a basis for the image of the rref, the CORRESPONDING columns of the original matrix A are also a basis. One thing that always works is to use the pivot columns of the original matrix: these are the columns where the rref has leading ones.

Suppose, for example, that the pivot columns are the first and third. Then the first and third columns of the original matrix are a basis for its image. HOWEVER, the original matrix and its rref need not have the same image.

The simplest example where a matrix A and its rref do not have the same image (column space) is

\[
A = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\]

The column space is the line spanned by that vector: the e_2 or y-axis. The rref of A is

\[
\begin{pmatrix} 1 \\ 0 \end{pmatrix},
\]

and the column space is the line spanned by that one vector: the e_1 or x-axis.
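These computations are easy to reproduce by machine. Below is a minimal sketch using SymPy (the choice of library is ours; the matrix encodes the single equation x + 2y + 3z = 0 from the example above), producing a kernel basis from the free variables and an image basis from the pivot columns:

```python
# Kernel and image of a matrix via its rref, using SymPy.
import sympy as sp

A = sp.Matrix([[1, 2, 3]])         # the single equation x + 2y + 3z = 0

# Basis for the kernel: one vector per free variable.
print(A.nullspace())                # [Matrix([-2, 1, 0]), Matrix([-3, 0, 1])]

# Pivot columns of the rref identify columns of A forming an image basis.
rref, pivots = A.rref()
print([A.col(j) for j in pivots])   # [Matrix([1])]
```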


RBF Kernel – Why It's So Popular?

In this section, we look at the RBF (Radial Basis Function) kernel, which has a flexible representation and is widely used in practical kernel methods.

An RBF kernel is a kernel which depends only on the norm \(\|x - x'\|\).
In particular, the following form of kernel is called the Gaussian kernel:

\[
k(x, x') = \exp\!\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right)
\]

Note : It's known that \(\exp(k(x, x'))\) is a valid kernel function if \(k(x, x')\) is a kernel function.
The Gaussian kernel has infinite dimensionality (its implicit feature space is infinite-dimensional).
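As a concrete sketch (our own code; \(\sigma\) is the bandwidth parameter from the formula above):

```python
# Gaussian (RBF) kernel: depends only on the distance between its arguments.
import numpy as np

def gaussian_kernel(x, x_prime, sigma=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    diff = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma**2))

print(gaussian_kernel([0.0, 0.0], [1.0, 1.0], sigma=1.0))  # exp(-1) ~ 0.3679
```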

In this section, I’ll show you how it fits real data, to help you understand why this kernel (as used in Parzen estimation) is so popular.
For simplicity, we start from the linear regression discussed in the previous section.

Now, to make things simple, let us assume the following binary classification of 2-dimensional vectors, and consider the probability of errors.

As you can easily imagine, errors occur with high probability near the boundary, and with low probability far from it. As you can see below, the probability of errors follows a 2-dimensional normal distribution (Gaussian distribution) depending on the distance (Euclidean norm) from the boundary.

Note : For simplicity, here we’re assuming that the two coordinate variables are independent of each other, so their covariance is equal to zero. We also assume that the standard deviations of both coordinates are the same value \(\sigma\).
(i.e., the covariance matrix of the Gaussian distribution is isotropic.)

On the contrary, let us consider the error probability of the following point.
As you see below, this point is affected by both the upper-side boundary and the lower-side boundary, and its error probability becomes the sum of both contributions.

Note : If you simply add these probabilities (for the upper side and lower side), the total probability can exceed 1. Thus, strictly speaking, you should normalize the sum of these probabilities.

Eventually, the probability of errors can be described as a probability density formed by the combination (sum and normalization) of normal distributions centered at each observed point.

To see this in a brief example, let us assume the following 1-dimensional sine curve, with the following 6 observed points lying exactly on this curve.

Then, by applying the following steps (see the code sketch after this list), we can estimate the original sine curve from these 6 points.

  1. Assume a normal distribution (Gaussian distribution) centered at each of the 6 observed points.
  2. Get the weighted ratio of each distribution.
    For instance, pick a position \(x\) in the picture above (multiple Gaussian distributions), and evaluate each of the 6 distributions at \(x\); dividing each value by their sum gives the weight of each distribution there.
    The following picture shows the weighted plots of each distribution.
  3. Multiply by each observed value.
    For instance, if the t-value of the first observed point is 5 (see the picture of the sine curve above), then the effect of this first point (the black-colored line) at \(x\) is 5 times its weight there. (See the picture below.)
  4. Finally, sum all these values (i.e., these effects) over the 6 points at each position on the x-axis.
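Here is a minimal Python sketch of steps 1 through 4 (our own code; the observed points are illustrative, and the weights are normalized as in the note above):

```python
# Nadaraya-Watson style estimate of a sine curve from 6 observed points,
# following steps 1-4 above.
import numpy as np

def gaussian_kernel(x, x_n, sigma):
    return np.exp(-(x - x_n) ** 2 / (2.0 * sigma ** 2))

# 6 points lying exactly on a sine curve (illustrative values).
x_obs = np.linspace(0.0, 2.0 * np.pi, 6)
t_obs = np.sin(x_obs)

def predict(x, sigma=0.8):
    # Steps 1-2: weight of each observed point's Gaussian at x, normalized.
    w = gaussian_kernel(x, x_obs, sigma)
    w = w / w.sum()
    # Steps 3-4: multiply by the observed t-values and sum.
    return np.dot(w, t_obs)

xs = np.linspace(0.0, 2.0 * np.pi, 9)
print([round(float(predict(x)), 3) for x in xs])
```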

Recall that the predictive function of linear regression with basis functions can be written as a linear combination of the target values (t) and kernel functions. (See the previous section.)
As a result, you can easily estimate the original curve with the Gaussian kernel from the given observed data, as follows.
The Gaussian kernel has a rich representation and can fit a wide variety of curves.

Note : Here I showed a brief example using a simple 1-dimensional sine curve, but see section 6.3.1 in “Pattern Recognition and Machine Learning” (Christopher M. Bishop, Microsoft) for the general steps of Nadaraya-Watson regression (kernel smoothing).

The parameter \(\sigma\) in the Gaussian kernel is determined experimentally.
When \(\sigma\) is larger, the model becomes smoother. On the contrary, when \(\sigma\) is smaller, the model is locally dominated by nearby observed values.

The value of the standard deviation (\(\sigma\)) is large

The value of the standard deviation (\(\sigma\)) is small

Note : When the appropriate \(\sigma\) differs extremely from point to point, you can instead use estimation by the kNN (k-nearest-neighbor) method, another non-parametric approach.
Unlike parametric approaches, these non-parametric methods only fit well near the observed data. (See my earlier post “Understand basics of Regression” for parametric regressions.)

Now let’s go back to equations (6) and (7).

In these equations, the explicit formula of the basis function is unknown, but we can expect that these terms are also estimated by a Gaussian kernel, and we can then get the optimal Lagrange multipliers under this assumption.
As you saw above, when \(\sigma\) in the Gaussian kernel is smaller, the hyperplane will also be increasingly dominated by nearby observed data relative to the distant ones.

Note : In general, a regression function which forms a linear combination of kernels over the training set and the target values, \(y(x) = \sum_n k(x, x_n)\, t_n\), is called a linear smoother.
Here we got this form by intuitive thinking, but you can obtain the equivalent regression result by Bayesian inference (algebraic calculation) for a linear basis function model.


Terms and conditions

While it may be possible to restore certain data backed up to your Google Account, apps and their associated data will be uninstalled. Before proceeding, please ensure that data you would like to retain is backed up to your Google Account.

Downloading of the system image and use of the device software is subject to the Google Terms of Service. By continuing, you agree to the Google Terms of Service and Privacy Policy. Your downloading of the system image and use of the device software may also be subject to certain third-party terms of service, which can be found in Settings > About phone > Legal information, or as otherwise provided.




Grading

Your final score in the class is based on your achievement in each of the major topics of the course. In each area, I will assign you a score between 0 and 5 based on my perception of the depth of your understanding of that concept:

  • To achieve a score of 1 (D), you should know and state the concept's definition precisely.
  • Higher scores require, in increasing order of depth, that you also:
    • recognize examples, and perform computations correctly,
    • recognize situations where the concept applies, and set up computations using the concept,
    • write formal proofs using the concept, and
    • use the concept in sophisticated proofs.

    Your score in each topic is based on the following kinds of work:

    • Quizzes: Every topic will be quizzed at least once in class and at least once on an exam.
    • Homework: I will give a lengthy homework assignment on each topic. These can be revised.
    • Portfolios: In a portfolio, you write a summary of the important parts of the topic, and identify and complete relevant problems. Portfolios can be submitted via D2L or presented to me in my office.

    Your score in each topic will be the average of the two highest scores from different categories above.

    Your final score will be the lowest of your topic scores, adjusted upwards by up to a point according to your overall average. Here is the actual formula:

    Here G is your final grade, A is the average of your topic scores, and M is the minimum of your topic scores. The number G is translated into a grade by a GPA scale.


    1. Introduction

    Women are a minority in STEM (science, technology, engineering and mathematics). According to the Organisation for Economic Co-operation and Development (OECD), women’s participation in “science, mathematics and computing” at the university level averages 39% worldwide. However, this figure varies by country: the highest-ranked countries are Portugal (57%), Italy (53%), and Turkey (50%), while the United Kingdom (46%) and the United States (40%) rank just above the OECD average. Japan (25%) ranked below the average (OECD, 2017).

    In Japan, comparatively few women study physics and mathematics. The percentage of first-year female university students was only 16% in physics and 20% in mathematics (Ministry of Education, Culture, Sports, Science and Technology, 2018). A recent study in Japan asked 1086 members of the public how well 18 fields (including 12 STEM fields) were suited for women. The percentage of participants who answered “strongly agree” or “agree” that a field was suited was smallest for mechanical engineering. Physics ranked second from the bottom, and math ranked fourth from the bottom. These subjects were seen as unsuitable for girls, suggesting that the Japanese public considers STEM subjects better suited to men than women (Ikkatai et al., 2020). In the United Kingdom as well, physics (Archer et al., 2017; DeWitt et al., 2019; Francis et al., 2017) and mathematics (Smith, 2014) are perceived as masculine fields. In total, 92% of secondary school girls agreed that working in a STEM job would allow them to make a good living, but 67% believed that STEM jobs were male dominated, and 49% agreed that STEM jobs were hard for women to get (Cassidy et al., 2018). These results suggest that a masculine image of physics and mathematics could be preventing girls from choosing physics and mathematics, not only in Japan but also in the United Kingdom.


    Maggie graphed the image of a 90° counterclockwise rotation about vertex A of a triangle. The coordinates of B and C are (2, 6) and (4, 3), and the coordinates B’ and C’ of its image are (–2, 2) and (1, 4). What are the coordinates of vertex A? (Explain your work.)

    Let vertex A have coordinates (x, y).

    A 90° rotation about A turns each vertex a quarter turn around A, so vector AB is perpendicular to vector AB' (and they have the same length), and likewise for AC and AC'.

    Vectors AB = (2 − x, 6 − y) and AB' = (−2 − x, 2 − y) are perpendicular, so their dot product is zero:

    (2 − x)(−2 − x) + (6 − y)(2 − y) = 0, which simplifies to x^2 + y^2 − 8y + 8 = 0.

    Vectors AC = (4 − x, 3 − y) and AC' = (1 − x, 4 − y) are perpendicular, so

    (4 − x)(1 − x) + (3 − y)(4 − y) = 0, which simplifies to x^2 + y^2 − 5x − 7y + 16 = 0.

    Now, solve the system of two equations:

    Subtract these two equations: the quadratic terms cancel, leaving 5x − y − 8 = 0, so y = 5x − 8.

    Substitute it into the first equation: x^2 + (5x − 8)^2 − 8(5x − 8) + 8 = 0, i.e. 13x^2 − 60x + 68 = 0, whose roots are x = 2 and x = 34/13. Only x = 2 also satisfies |AB| = |AB'|, as a rotation requires, and it gives y = 2.

    Rotation by 90° counterclockwise about A(2, 2) gives image points B' and C' (see attached diagram)
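A quick computational check (our own sketch) confirms that rotating B and C by 90° counterclockwise about A(2, 2) produces the given image points:

```python
# Verify that a 90-degree counterclockwise rotation about A = (2, 2)
# sends B and C to the given image points.
def rotate90_ccw(p, a):
    px, py = p
    ax, ay = a
    # Translate to the origin, rotate (x, y) -> (-y, x), translate back.
    return (ax - (py - ay), ay + (px - ax))

A = (2, 2)
print(rotate90_ccw((2, 6), A))  # (-2, 2) == B'
print(rotate90_ccw((4, 3), A))  # (1, 4)  == C'
```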


    Image Kernels

    An image kernel is a small matrix used to apply effects like the ones you might find in Photoshop or Gimp, such as blurring, sharpening, outlining or embossing. They're also used in machine learning for 'feature extraction', a technique for determining the most important portions of an image. In this context the process is referred to more generally as "convolution" (see: convolutional neural networks.)

    To see how they work, let's start by inspecting a black and white image. The matrix on the left contains numbers, between 0 and 255, which each correspond to the brightness of one pixel in a picture of a face. The large, granulated picture has been blown up to make it easier to see; the last image is the "real" size.

    Let's walk through applying the following 3x3 kernel to the image of a face from above.

    Below, for each 3x3 block of pixels in the image on the left, we multiply each pixel by the corresponding entry of the kernel and then take the sum. That sum becomes a new pixel in the image on the right. Hover over a pixel on either image to see how its value is computed.

    One subtlety of this process is what to do along the edges of the image. For example, the top left corner of the input image only has three neighbors. One way to fix this is to extend the edge values out by one in the original image while keeping our new image the same size. In this demo, we've instead ignored those values by making them black.
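Here is a minimal NumPy sketch of this process (our own illustration, not the demo's code). It pads the input with black pixels, a slight variation on the demo, which instead blacks out the edge outputs:

```python
# 3x3 image-kernel convolution: each output pixel is the weighted sum of the
# corresponding input pixel and its 8 neighbors (out-of-bounds pixels are 0).
import numpy as np

def convolve3x3(image, kernel):
    h, w = image.shape
    padded = np.zeros((h + 2, w + 2), dtype=float)   # black border
    padded[1:-1, 1:-1] = image
    out = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            # Multiply the 3x3 neighborhood by the kernel and sum.
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out.clip(0, 255)                           # keep valid brightness

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)
img = np.random.randint(0, 256, (8, 8)).astype(float)
print(convolve3x3(img, sharpen))
```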

    Here's a playground where you can select different kernel matrices and see how they affect the original image, or build your own kernel. You can also upload your own image or use live video if your browser supports it.

    The sharpen kernel emphasizes differences in adjacent pixel values. This makes the image look more vivid.

    The blur kernel de-emphasizes differences in adjacent pixel values.

    The emboss kernel (similar to the Sobel kernel, and the two names are sometimes used interchangeably) gives the illusion of depth by emphasizing the differences of pixels in a given direction. In this case, in a direction along a line from the top left to the bottom right.

    The identity kernel leaves the image unchanged. How boring!

    The custom kernel is whatever you make it.

    Sobel kernels are used to show only the differences in adjacent pixel values in a particular direction.

    An outline kernel (also called an "edge" kernel) is used to highlight large differences in pixel values. A pixel next to neighbor pixels with close to the same intensity will appear black in the new image while one next to neighbor pixels that differ strongly will appear white.

    For more, have a look at Gimp's excellent documentation on using image kernels. You can also apply your own custom filters in Photoshop by going to Filter -> Other -> Custom.


    Lesson 3

    In an earlier lesson, students were reminded of the connection between multiplication and division. They revisited the idea of division as a way to find a missing factor, which can either be the number of groups, or the size of one group.

    In this lesson, students interpret division situations in story problems that involve equal-size groups. They draw diagrams and write division and multiplication equations to make sense of the relationship between known and unknown quantities (MP2).

    Learning Goals

    Let’s explore situations that involve division.

    Learning Targets

    CCSS Standards


    Additional Resources

    IM 6–8 Math was originally developed by Open Up Resources and authored by Illustrative Mathematics®, and is copyright 2017-2019 by Open Up Resources. It is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). OUR's 6–8 Math Curriculum is available at https://openupresources.org/math-curriculum/.

    Adaptations and updates to IM 6–8 Math are copyright 2019 by Illustrative Mathematics, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

    Adaptations to add additional English language learner supports are copyright 2019 by Open Up Resources, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

    The second set of English assessments (marked as set "B") are copyright 2019 by Open Up Resources, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

    Spanish translation of the "B" assessments are copyright 2020 by Illustrative Mathematics, and are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

    The Illustrative Mathematics name and logo are not subject to the Creative Commons license and may not be used without the prior and express written consent of Illustrative Mathematics.

    This site includes public domain images or openly licensed images that are copyrighted by their respective owners. Openly licensed images remain under the terms of their respective licenses. See the image attribution section for more information.